
faul_sname

Fuck around once, find out once. Do it again, now it's science.

1 follower   follows 1 user   joined 2022 September 06 20:44:12 UTC

No bio...

User ID: 884

Because it doesn't have to be 'widespread' to have a significant effect on outcomes. Even accounting for how ambiguous that term is. If 50,000 fraudulent votes are cast in one precinct, that might not count since it wasn't taking place elsewhere?

I am not aware of anyone pointing out 50 fraudulent votes within a single district, let alone 50,000. If 50,000 fraudulent votes in a single district had actually been shown to have happened, that argument would be a lot more relevant -- particularly if those votes came from individual people who should not have been allowed to vote each deciding, on their own, to vote.

Basically my issue with this is that the type of fraudulent vote they're going after here isn't the type of fraud I would expect to swing elections.

Of course, the Dems spent years alleging Russian 'interference' with the 2016 election despite no direct evidence, so I also don't think they've demonstrated good faith on the issue anyway.

Agreed.

Honestly I feel like all the talk of fraud is a distraction from things that are legal but have significant effects on voter turnout (e.g. polling place locations, canvassing, changing laws around mail-in ballots, etc).

The reality is: vegetables suck, you just have to eat them

I recognize the higher-level point you're making, and I think it's a valid point, but on the object level I think you might need a steamer or an air fryer. If your experience is that vegetables suck, you may get a lot of mileage out of figuring out ways to cook them that you actually like.

If I have the choice between a bag of Doritos or a bowl of lightly steamed broccoli with lemon, pepper, and a sprinkle of MSG, I'll generally take the broccoli (assuming both are already prepared). As snacks go, chips are cheaper, more convenient, and much easier to mindlessly eat with one hand while doing something else, but I don't think I actually experience more enjoyment while eating chips than I do while eating vegetables that I cooked according to my own preferences.

Then, moreover, they know that there have been many high-profile instances of products shipping, having an interface exposed that is trivially-attackable, and when it's attacked, the manufacturers ignore it and just say some bullshit about how it was supposed to just be for the manufacturer for debugging purposes, so they're not responsible and not going to do anything about it.

Was "lol we didn't mean to leave that exposed" a get-out-of-liability-free card under UK law before this guidance came out? If so, I can see why you'd want this. If not, I'd say the issue probably wasn't "not enough rules" but rather "not enough enforcement of existing rules", and I don't expect "add more rules" to be very useful in such a case -- and I especially don't expect it to be useful for rules that look like "you are legally required to use your best judgement".

It's a bullshit thing by bad entity manufacturers who don't care.

I agree, but I don't think it's possible to legally compel companies to thoughtfully consider the best interests of their users.

Honestly, I probably would have not done as good of a job if I had tried to put this set of ideas together from scratch myself.

Neither would I. My point wasn't "the legislators are bad at their job", it was "it's actually really really hard to write good rules, and frequently having bad explicit rules is worse than having no explicit rules beyond 'you are liable for harm you cause through your negligence'".

Again, that interpretation is nice if correct. Can you point to anything in the document which supports the interpretation that saying "We have assessed that leaving this debug interface provides user benefit because the debug interface allows the user to debug" would actually be sufficient justification?

My mental model is "it's probably fine if you know people at the regulatory agency, and probably fine if you don't attract any regulatory scrutiny, and likely not to be fine if the regulator hates your guts and wants to make an example of you, or if the regulator's golf buddy is an executive at your competitor". If your legal team approves it, I expect it to be on the basis of "the regulator has not historically gone after anyone who put anything even vaguely plausible down in one of these, so just put down something vaguely plausible and we'll be fine unless the regulator has it out for us specifically". But if anything goes as long as it's a vaguely plausible answer at something resembling the question on the form, and as long as it's not a blatant lie about your product where you provably know that you're lying, I don't expect that to help very much with IoT security.

And yes, I get that "the regulator won't look at you unless something goes wrong, and if something does go wrong they'll look through your practices until they find something they don't like" is how most things work. But I think that's a bad thing, and the relative rarity of that sort of thing in tech is why tech is one of the few remaining productive and relatively-pleasant-to-work-in industries. You obviously do sometimes need regulation, but in a lot of cases, probably including this one, the rules already on the books would be sufficient if they were consistently enforced. In practice they are rarely enforced, and the conclusion people come to is "the current regulations aren't working, so we need to add more regulations" rather than "we should try being more consistent about enforcing the rules that are already on the books". So you end up with even more vague regulations that most companies make token attempts to cover their asses on but otherwise ignore, and eventually you land in a state where full compliance with the rules is impractical but also generally not expected -- until someone pisses off a regulator, at which point their behavior becomes retroactively unacceptable.

Edit: As a concrete example of the broader thing I'm pointing at, HIPAA is an extremely strict standard, and yet in practice hospital systems are often laughably insecure. Adding even more requirements on top of HIPAA would not help.

That's a nice legal theory you have there.

Let's say you're an engineer at one such company, and you want to expose a UART serial interface to allow the device you're selling to be debuggable and modifiable for the subset of end-users who know what they're doing. You say "this is part of the consumer-facing functionality". The regulator comes back and says "ok, where's the documentation for that consumer-facing functionality" and you say "we're not allowed to share that due to NDAs, but honest, this completely undocumented interface is part of the intended consumer-facing functionality".

How do you expect that to go over with the regulator? Before that, how do you expect the conversation with the legal department at your company to go when you tell them that's your plan for what to tell the regulator if they ask?

I present to you: nobody.

... I see you spending a lot of effort arguing as though The_Nybbler believes that giving an inch here is a bad idea because a tiny regulation will directly kill innovation, while The_Nybbler is actually arguing that there's no particular reason for the regulators who introduced this legislation to stop at only implementing useful regulations that pass cost-benefit analysis, that the other industries we can look at do seem to have vastly overreaching regulators, and so a naive cost-benefit analysis of a marginal regulation which does not factor in the likely-much-larger second-order effects is useless (though @The_Nybbler, do correct me if I'm wrong about this and you think introducing regulation would be bad even if the first-order effects of regulation were positive and there was some actually-credible way of ensuring that the scope of the regulation stayed strictly limited).

Honestly I think both of you could stand to focus a bit more on explaining your own positions and less on arguing against what you believe the other means, because as it stands it looks to me like a bunch of statements about what the other person believes, like "you argue that the first-order effects of the most defensible part of this regulation are bad, but you can't support that" / "well you want to turn software into an over-regulated morass similar to what aerospace / pharma / construction have become".

IMO, it shows that you misunderstand how these things work. They're not saying "secure against a nation state decapping your chip". They actually refer to ways that persistent storage can be generally regarded as secure, even if you can imagine an extreme case.

Quoting the examples:

Example 1: The root keys involved in authorization and access to licensed radio frequencies (e.g. LTE-m cellular access) are stored in a UICC.

Ok, fair enough, I can see why you would want to prevent users from accessing these particular secrets on the device they own (because, in a sense, they don't own this particular bit). Though I contend that the main "security" benefit of these is fear of being legally slapped around under CFAA.

Example 2: A remote controlled door-lock using a Trusted Execution Environment (TEE) to store and access the sensitive security parameters.

Seems kinda pointless. If an attacker can read the flash storage on your door lock, presumably that means they've already managed to detach the door lock from your door, and can just enter your house. And if a remote attacker has the ability to read the flash storage because they have gained the ability to execute arbitrary code, they can presumably just directly send the outputs which unlock the door without mucking about with the secrets at all.

Example 3: A wireless thermostat stores the credentials for the wireless network in a tamper protected microcontroller rather than in external flash storage.

What's the threat model we're mitigating here, such that the benefit of mitigating that threat is worth the monetary and complexity cost of requiring an extra component on e.g. every single adjustable-color light bulb sold?

H-what? What are you even talking about? This doesn't even make any sense. The standard problem here is that lots of devices have debug interfaces that are supposed to only be used by the manufacturer (you would know this if you read the definitions section), yet many products are getting shipped in a state where anyone can just plug in and do whatever they want to the device. This is just saying to not be a retard and shut it off if it's not meant to be used by the user.

On examination, I misread, and you are correct about what the document says.

That said, the correct reading then seems to be "users should not be able to debug, diagnose problems with, or repair their own devices which they have physical access to, and which they bought with their own money." That seems worse, not better. What's the threat model this is supposed to be defending against? Is this a good way of defending against this threat model?

I think it's good old issue #594 back from the dead.

I reject the concept that as soon as epsilon regulation of an industry is put into place, it necessarily and logically follows that there is a slippery slope that results in innovation dying. I think you need at least some argument further. It's easy to just 'declare' bankruptcy a slippery slope, but we know that many end up not.

Nobody is arguing that "the moment any regulation is in place, it is inevitable that we will slide all the way down the slippery slope of increasing regulation and all innovation in that industry will die". The argument is, instead, that adding a regulation increases the chance that we will slide down that slippery slope. That chance may be worth it, if the step is small and the benefit of the regulation is large, but in the case of the entirety of ETSI EN 303 645 (not just section 5.1 in isolation), I don't think that's the case, and I certainly don't think it's a slam-dunk that it's worth the cost.

Section 5.1, "You are not allowed to use a default password on a network interface as the sole means of authorization for the administrative functions of an IoT device", if well-implemented, is probably such a high-benefit low-risk regulation.

Section 5.4.1, "sensitive security parameters in persistent storage shall be stored securely by the device," seems a bit more likely to be a costly provision, and IMO one that misunderstands how hardware security works (there is no such thing as robust security against an attacker with physical access).

They double down on the idea that manufacturers can make something robust to physical access in section 5.4.2, "where a hard-coded unique per device identity is used in a device for security purposes, it shall be implemented in such a way that it resists tampering by means such as physical, electrical or software."

And then there's perplexing stuff like 5.6.4, "where a debug interface is physically accessible, it shall be disabled in software". Does this mean that if you sell a color-changing light bulb, and the bulb has a USB-C port, you're not allowed to expose logs across the network and instead have to expose them only over the USB-C port? I would guess not, but I'd also guess that if I were in the UK the legal team at my company would be very unhappy if I just went with my guess without consulting them.

And that's really the crux of the issue: introducing regulation like this means that companies now have to choose between exposing themselves to legal risk, making dumb development decisions based on the most conservative possible interpretation of the law, or involving the legal department way more frequently in development decisions.

Yes, because the baseline for "randomly guessing" is 1/5612 ("twitter user @fluffyporcupine matches this specific one of the 5612 facebook users"), not 1/2 ("twitter user @fluffyporcupine is/is not the same user as facebook user Nancy Prickles").
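To put rough numbers on that (a toy sketch: the 5612 comes from the discussion above, but the 30% accuracy figure below is purely illustrative, not from any real matcher):

    # Matching a Twitter account against a pool of Facebook users.
    # The honest chance baseline is 1/N (pick the right user out of N candidates),
    # not 1/2 (a coin flip on "same user / not same user").
    n_candidates = 5612
    chance_top1 = 1 / n_candidates        # ~0.00018, i.e. ~0.018%

    # A hypothetical matcher that picks the right user 30% of the time
    # (made-up number) sounds worse than a coin flip, but beats the real
    # chance baseline by a factor of more than 1600.
    model_top1 = 0.30
    print(f"chance baseline: {chance_top1:.5f}")
    print(f"lift over chance: {model_top1 / chance_top1:.0f}x")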

Doesn't scare me for personal reasons -- I'm trivially identifiable, you don't need to resort to fancy ML techniques. But if you're actually trying to remain anonymous, and post under both your real name and a pseudonym, then perhaps it's worth paying attention to (e.g. spinning up separate throwaway accounts for anything you want to say that is likely to actually lead to significant damage to your real-world identity and doing the "translate to and from a foreign language to get changed phrasing without changed meaning" thing).

I think the guideline should be "the topic keeps coming up over and over again in the threads for separate weeks, and the conversation in the new week tends to reference the conversation in older weeks". Covid, when it was a thing, absolutely qualified as that. Russia/Ukraine and Israel/Palestine were somewhat less of this, since each week's thread tended to be about current events more than about continuing to hash out an ongoing disagreement. Trans stuff, I think, qualifies for this, as it does seem to be the same people having the same discussion over and over. Can't think of too many other examples.

Don't pin it and I think it's fine. The people who want to have that discussion can subscribe to the thread. A second such containment thread for rationalist inner-circle social drama would also be nice. Maybe a third for trans stuff.

I think "topics that tend to suck all the air out of the room when they get brought up go to their own containment thread, anyone who cares to discuss that topic can subscribe to the thread, containment threads only get pinned if there's at least a quarter as much past-activity in them as in the typical CW thread" would probably be an improvement.

TBH if someone is put off by the fact that holocaust denial stuff gets put in a dedicated thread rather than banned I think they would probably be put off by the speech norms here anyway, best that they discover that early. I personally find the discussion tiresome and poorly argued, but I don't think there's a low-time-investment way to moderate on that basis, at least not yet. Maybe check back in 3 years and LLMs will be at that point, but for the time being.

All that said, I am not a mod, nor am I volunteering to spend the amount of time it would take to be a mod, so ultimately the decision should be made by the people who are putting in that time and effort.

Congratulations!

In terms of why I'm not so active it's mostly the "had a kid 2 months ago" thing, not anything to do with Motte quality.

You'll be happy to know that I did in fact throw some fairly substantial amounts of money at jefftk and friends for their wastewater surveillance / sequencing / anomaly detection project. Significantly prompted by us having this conversation.

I am not arguing that you can't get a single standard deviation of gain using gene editing, and I am especially not arguing that you can't get there eventually using an iterative approach. I am arguing that you will get less than +1SD of gain (and, in fact, probably a reduction) in intelligence if you follow the specific procedure of

  1. Catalogue all of the different genes where different alleles are correlated with differences in the observed phenotypic trait of interest (in this case intelligence)
  2. Determine the "best" allele for every single gene, and edit the genome accordingly at all of those places.
  3. Hopefully have a 300-IQ super baby.

The specific thing I want to call out is that each of the alleles you've measured to be the "better" variant is the better variant in the context of the environment the measurements occurred in. If you change a bunch of them at once, though, you're going to end up in a completely different region of genome space, where the genotype/phenotype correlations you found in the normal human distribution probably won't hold.

I don't know if you have any experience with training ML models. I imagine not, since most people don't. Still, if you do have such experience, you can read my point as "if you take some policy that has been somewhat optimized by gradient descent for a loss function which is different from, but correlated with, the one you care about, and calculate the local gradient according to the loss function you care about, and then you take a giant step in the direction of the gradient you calculated, you are going to end up with higher loss even according to the loss function you care about, because the loss landscape is not flat". Basically my point is "going far out of distribution probably won't help you, even if you choose the direction that is optimal in-distribution -- you need to iterate".
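A stripped-down numerical sketch of that last point, in case it helps (nothing here is specific to genomes or to any particular model; it's just a one-dimensional toy loss whose landscape is not a single smooth basin):

    # Toy non-quadratic loss: small steps along the local gradient help,
    # but one giant step in the same locally-good direction overshoots badly.
    def loss(x):
        return x**4 - 2 * x**2 + 0.5 * x

    def grad(x):
        return 4 * x**3 - 4 * x + 0.5

    x = 0.9                      # a "somewhat optimized" starting point
    g = grad(x)

    small_step = x - 0.05 * g    # cautious, iterative move
    giant_step = x - 5.0 * g     # "change everything at once" analogue

    print(loss(x))               # ~ -0.514
    print(loss(small_step))      # ~ -0.515  (slightly better)
    print(loss(giant_step))      # ~ +5.26   (far worse, despite moving "downhill")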

Actually waiting for gene edited baby to grow is slow, and illegal

Yep. And yet, I claim, necessary if you don't want to be limited to fairly small gains.

Arguing that it would break well before 1 SD is... just wishful thinking. There's still a lot of low hanging fruit.

Note that this is "below 1SD of gains beyond what you would expect from the parents, and in a single iteration". If you were to take e.g. Terry Tao's genome, and then identify 30 places where he has "low intelligence" variants of whatever gene, and then make a clone with only those genes edited, and a second clone with no gene edits, I would expect the CRISPR clone to be a bit smarter than the unaltered clone, and many SD smarter than the average person. And, of course, at the extreme, if you take a zygote from two average-IQ parents, and replace its genome with Tao's genome then the resulting child would probably be more than 1SD smarter than you'd expect based on the IQs of the parents, because in that case you're choosing a known place in genome space to jump to, instead of choosing a direction and jumping really far in that direction from a mediocre place.

Maybe technical arguments don't belong in the CW thread, but people assuming that the loss landscape is basically a single smooth basin is a pet peeve of mine.

I thought it was claimed by the birds this year?

Why would you assume "aliens" not "previous Earth civilization" in that case?

Sorry for the slow reply, there's a bit to address.

Exactly. My goal is to investigate how exactly that happens. How we reason, how evidence works on us, how we draw conclusions and form beliefs.

Yeah, I like to think about this too. My impression is that there are two main ways that people come to form beliefs, in the sense of models of the world that produce predictions. Some people may lean more towards one way or the other, but most people are capable of changing their mind in either way in certain circumstances.

The first is through direct experience. For example, most people are not born knowing that if you take a cup of liquid in a short fat glass, and pour it into a tall skinny glass, the amount of liquid remains the same despite the tall skinny glass looking like it has more liquid. The way people become convinced of this kind of object permanence is just by playing with liquids until they develop an intuitive understanding of the dynamics involved.

The second is by developing a model of other people's models, and querying that model to generate predictions as needed. This is how you end up with people who think things like "investing in real estate is the path to a prosperous life" despite not being particularly financially literate, nor having any personal experience with investing in real estate -- the successful people invest in real estate and talk about their successes, and so the financially illiterate person will predict good outcomes of pursuing that strategy despite not being able to give any specifics in terms of by what concrete mechanism that strategy should be expected to be successful. As a side note, expect it to be super frustrating to argue with someone about a belief they have picked up in this way -- you can argue till the cows come home about how some specific mechanism doesn't apply, but they weren't convinced by that mechanism, they were convinced by that one smart person they know believing something like this.

For the first type of belief, I definitely don't consider there to be any element of choice in what you expect your future observations to be based on your intuited understanding of the dynamics of the system. I cannot consciously decide not to believe in object permanence. For the second type of belief, I could see a case being made that you can decide which people's models to download into your brain, and which ones to trust. To an extent I think this is an accurate model, but I think if you trust the predictions generated by (your model of) someone else's model and are burned by that decision enough times, you will stop trusting the predictions of that model, same as you would if it was your own model.

There are intermediate cases, and perhaps it's better to treat this as a spectrum rather than a binary classification, and perhaps there are additional axes that would capture even more of the variation. But that's basically how I think about the acquisition of beliefs.

Incidentally I think "logical deduction generally works as a strategy for predicting stuff in the real world" tends to be a belief of the first type, generated by trying that strategy a bunch and having it work. It will only work in specific situations, and people who hold that kind of belief will have some pretty complex and nuanced ideas of when exactly that strategy will and won't work, in much the same way that embodied humans actually have some pretty complex and nuanced ideas about what exactly it means for objects to be permanent. I notice "trust logical deduction and math" tends to be a more widespread belief among mathematicians and physicists, and a much less widespread belief among biologists and doctors, so I think the usefulness of that heuristic varies a lot based on your context.

We reason based on data.

When we take data in, we can accept it uncritically, and promptly form a belief. This is a choice.

Interesting. This is not really how I would describe my internal experience. I would describe my experience as something more like "when I take data in, I note the data that I am seeing. I maybe form some weak rudimentary model of what might have caused me to observe the thing I saw; if I'm in peak form I might produce more than one (i.e. two, it's never more than two in practice) competing model that might explain that observation. If my model does badly, I don't trust it very much, whereas if it does well over time I adopt the idea that the model is true as a belief".

But anyway, this might all be esoteric bullshit. I'm a programmer, not a philosopher. Let's move back to the object level.

One of the bedrock parts of Materialism is that effects have causes.

Ehhh. Mostly true, at least. True in cases where there's an arrow of time that points from low-entropy systems to high-entropy systems, at least, which describes the world we live in and as such is probably good enough for the conversation at hand (see this excellent Wolfram article for nuance, though, if you're interested in such things -- look particularly at the section titled "Reversibility, Irreversibility and Equilibrium" for a demonstration that "the direction of causality" is "the direction pointing from low entropy to high entropy, even in systems that are reversible").

Therefore, under Materialist assumptions, the Big Bang has a cause.

Seems likely to me, at least in the sense of "the entropy at the moment of the Big Bang was not literally zero, nor was it maximal, so there was likely some other comprehensible thing going on".

We have no way of observing that cause, nor of testing theories about it. If we did, we'd need a cause for that cause, and so on, in a potentially-infinite regress

I think if we managed to get back to either zero entropy or infinite entropy we wouldn't need to keep regressing. But as far as I know we haven't actually gotten there with anything resembling a solid theory.

So, one might nominate three competing models:

  • The cause is a seamless physics loop, part of which is hidden behind the back wall.
  • The universe is actually a simulation, and the real universe it's being simulated in is behind the back wall.
  • One or more of the deists are right, and it's some creator divinity behind the back wall.

I'd nominate a fourth hypothesis: "the big bang is the point where, if you trace the chains of causality back past it, entropy starts going back up instead of down; time is defined as the direction away from the big bang" (see above Wolfram article). In any case, the question "but can we chase back the chain of causality further somehow, what imbues some mathematical object with the fire of existence" still feels salient, at least (though maybe it's just a nonsense question?).

In any case, I am with you that none of these hypotheses make particularly useful or testable predictions.

But yeah, anyone claiming that materialism is complete in the way you are looking for is, I think, wrong. For that matter, I think anyone claiming the same of deism is wrong.

It is common here to encounter people who claim the human mind is something akin to deterministic clockwork, and therefore free will can't exist

I think those people are wrong. I think free will is what making a decision feels like from the inside -- just because some theoretical omniscient entity could in theory predict what your decision will be before you know what your decision is doesn't mean you know what that decision would be ahead of time. If predictive ML models get really good, and also EEGs get really good, and we set up an experiment wherein you choose when to press a button, and a computer can reliably predict 500ms faster than you that you will press the button, I don't think that experiment would disprove free will. If you were to close the loop and light up a light whenever the machine predicts the button would be pressed, a person could just be contrary and not press the button when the light turns on, and press the button when the light is off (because the human reaction time of 200ms is less than the 500ms standard we're holding the machine to). I think that's a pretty reasonable operationalization of the "I could choose otherwise" observation that underlies our conviction that we have free will. IIRC this is a fairly standard position called "compatibilism" though I don't think I've ever read any of the officially endorsed literature.

That said, in my personal experience "internally predict that this outcome will be the one I observe" does not feel like a "choice" in the way that "press the button" vs "don't press the button" feels like a choice. And it's that observation that I keep coming back to.

Finally, we can adopt an axiom. Axioms are not evidence, and they are not supported by evidence; rather, evidence either fits into them or it doesn't. We use axioms to group and collate evidence. Axioms are beliefs, and they cannot be forced, only chosen, though evidence we've accepted as valid that doesn't fit into them must be discarded or otherwise handled in some other way. This, again, is a choice.

This might just be a difference in vocabulary -- what you're calling "axioms" I'm calling "models" or "hypotheses", because "axiom" implies to me that it's the sort of thing where if you get conflicting evidence you have to throw away the evidence, rather than having the option of throwing away the "axiom". Maybe you mean something different by "choice" than I do as well.

Primarily, the belief that one's other beliefs are not chosen but forced seems to make them more susceptible to accepting other beliefs uncritically, resulting in our history of "scientific" movements and ideologies that were not in any meaningful sense scientific, but which were very good at assembling huge piles of human skulls. Other implications branch out into politics, the nature of liberty and democracy, the proper understanding of values, how we should approach conflict, and so on, but these are beyond the scope of this margin. I've just hit 10k characters and have already had to rewrite half this post once, so I'll leave it here.

If we're going by "stated beliefs" rather than "anticipatory beliefs" I just flatly agree with this.

In conclusion, I'm pretty sure this is all the Enlightenment's fault.

That pattern of misbehavior happened before the enlightenment too though. And, on balance, I think the enlightenment in general, and the scientific way of thinking in particular, left us with a world I'd much rather live in than the pre-enlightenment world. I will end with this graph of life expectancy at birth over time.

"other people are actually just p zombies behaving as if they are conscious like me" generates predictions that are just as good

I genuinely don't think it does. Unless you mean "believing" that in the classic "invisible dragon in my garage" sense, which I don't count as actual belief. Rule of thumb - if you're preemptively coming up with excuses for why your future observations will not support your theory over competing theories, or why your theory actually predicts exactly the same thing that the classic theory predicts and the only differences are in something unfalsifiable, that should be a giant red flag for your theory.

For example: I think that my experience of consciousness is caused by specific physical things my nervous system does sometimes. If I slap some electrodes on my scalp to take an electroencephalogram, and then do some task that involves introspecting, making conscious decisions, and describing those experiences, I expect that I will see particular patterns of electrical activity in my brain any time I make a conscious decision. I expect that the EEG readouts from other people would have similar patterns.

For the p-zombie explanation to make sense, we would either have to say that my experience of consciousness and the things I said about it were not caused by things happening in my nervous system, or we would have to say that those patterns in my nervous system and the way I described my experience were related to my consciousness, but in other people there was something else going on which just happened to have indistinguishable results. And we can predict in advance that any time we try to use the "p-zombie" hypothesis, what we will actually end up doing is asking "what do we predict in the world where other people's consciousness works the same way as mine" and then saying "the p-zombie hypothesis says the same thing" -- the p-zombie hypothesis does not actually predict anything on its own.

That's a way better life than actually being pro social all the time.

As an empirical matter, I think that if you try rating your internal subjective experience after ripping off a stranger who gets angry at you but who you'll never see again vs your internal subjective experience after helping a stranger who expresses gratitude but you'll never see again, you may be surprised at which one results in higher subjective well-being. That doesn't really have any bearing on the factual questions of other peoples' internal experiences, just a prediction I have about what your own internal experience will be like.

I think that's a very pragmatic and reasonable position, at least in the abstract. You're in great intellectual company, holding that set of beliefs. Just look at all of the sayings that agree!

  • You can't reason someone out of something they didn't reason themselves into
  • It is difficult to get a man to understand something, when his salary depends on his not understanding it
  • We don't see things as they are, we see them as we are
  • It's easier to fool people than to convince them that they have been fooled

And yet! Some people do change their mind in response to evidence. It's not everyone, it might not even be most people, but it is a thing that happens. Clearly something is going on there.

We are in the culture war thread, so let's wage some culture war. Very early in this thread, you made the argument

What does replacing the Big Bang with God lose out on? Both of them share the attribute of serving as a termination point for materialistic explanations. Anything posited past that point is unfalsifiable by definition, unless something pretty significant changes in terms of our understanding of physics.

What does replacing the Big Bang with God lose out on? I think the answer is "the entire idea that you can have a comprehensible, gears-level model of how the universe works". A "gears-level" model should at least look like the following:

  1. If the model were falsified, there should be specific changes to what future experiences you anticipate (or at the very least, you should lose confidence in some specific predictions you had before)
  2. Take the components of your model. If you take one of those parts, and you make some large arbitrary change to it, the model should now make completely different (and probably wrong, and maybe inconsistent) predictions.
  3. If you forgot a piece of your model, could you rederive it based on the other pieces of the model?

So I think the standard model of physics mostly satisfies the above. Working through:

  1. If general relativity were falsified, we'd expect that e.g. the predictions it makes about the precession of Mercury would be inaccurate enough that we would notice. Let's take the cosmological constant Λ in the Einstein Field Equation, which represents the energy density of vacuum, and means that on large enough scales, there is a repulsive force that overpowers the attractive force of gravity.
  2. If we were to, for example, flip the sign, we would expect the universe to be expanding at a decreasing rate rather than an increasing rate (affecting e.g. how redshifted/blueshifted distant standard candles were); the relevant equation is sketched just after this list.
  3. If you forget one physics equation, but remember all the others, it's pretty easy to rederive the missing one. Source: I have done that on exams when I forgot an equation.
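For point 2, the piece of machinery behind "flip the sign of Λ" is the second Friedmann equation (standard cosmology, written from memory, so treat it as a sketch rather than a citation):

    \[
      \frac{\ddot{a}}{a} \;=\; -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^2}\right) \;+\; \frac{\Lambda c^2}{3}
    \]

With Λ > 0 that last term is a repulsive contribution that eventually dominates and the expansion accelerates; flip the sign and the same term decelerates the expansion instead.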

Side note: the Big Bang does not really occupy a God-shaped space in the materialist ontology. I can see where there would be a temptation to view it that way - the Big Bang was the earliest observable event in our universe, and therefore can be viewed as the cause of everything else, just like God - but the Big Bang is a prediction (retrodiction?) that is generated by using the standard model to make sense of our observations (e.g. the redshifting of standard candles, the cosmic microwave background). The question isn't "what if we replace the Big Bang with God", but rather "what if we replace the entire materialist edifice with God".

In any case, let's apply the above tests to the "God" hypothesis.

  1. What would it even mean for the hypothesis "we exist because an omnipotent, omniscient, omnibenevolent God willed it" to be falsified? What differences would you expect to observe (even in principle)?
  2. Let's say we flip around the "omniscient" part of the above - God is now omnipotent and omnibenevolent. What changes?
  3. Oops, you forgot something about God. Can you rederive it based on what you already know?

My point here isn't really "religion bad" so much as "you genuinely do lose something valuable if you try to use God as an explanation".

I don't think reasoned beliefs are forced by evidence; I think they're chosen. He's arguing that specific beliefs aren't a choice, any more than believing 1+1 = 2 is a choice.

The choice of term "reasoned belief" instead of simply "belief" sounds like you mean something specific and important by that term. I'm not aware of that term having any particular meaning in any philosophical tradition I know about, but I also don't know much about philosophy.

He's arguing that specific beliefs aren't a choice, any more than believing 1+1 = 2 is a choice.

That sounds like the "anticipated experiences" meaning of "belief". I also cannot change those by sheer force of will. Can you? Is this another one of those less-than-universal human experiences similar to how some people just don't have mental imagery?

The larger point I'm hoping to get back to is that the deterministic model of reason that seems to be generally assumed is a fiction

I don't think I would classify probabilistic approaches like that as "deterministic models of reason".

But yeah I'm starting to lean towards "there's literally some bit of mental machinery for intentionally believing something that some people have".

Yeah. I think there's a certain baseline level of trust required for democracy to work. I doubt "one state solution, and that one state is a democracy, and they vote on what should happen to the minority, what could go wrong" is a good solution.

Though a good solution may just not exist.

My point with the horse/weasel analogy was that Israel is strong enough militarily that attacks against it are likely to make it angry but probably not cause enough damage to weaken it. "If they vote on dinner the horse will be fine" was not intended as advocacy for a one state democratic solution.

I posted an initial call for hypotheses to LW, got a couple of good ones, including "the SL policy network is acting as a crude estimator of the relative expected utility of exploring this region of the game tree" which strikes me as both plausible and also falsifiable.
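For context on why that hypothesis is even well-formed: in AlphaGo-style search, the SL policy's output enters the node-selection rule as a prior that scales the exploration bonus. A simplified sketch of that selection rule (PUCT; the variable names and the dict-based node representation here are mine, not from any particular engine):

    import math

    def puct_select(children, c_puct=1.5):
        """Pick which child move to explore next.

        Each child dict carries:
          "prior"  - the SL policy network's probability for this move
          "visits" - how many times search has already gone through it
          "value"  - mean evaluation of the subtree so far
        Because "prior" directly scales the exploration bonus, the policy net
        effectively rates how worthwhile this region of the tree is to explore.
        """
        total_visits = sum(c["visits"] for c in children)

        def score(c):
            exploration = c_puct * c["prior"] * math.sqrt(total_visits) / (1 + c["visits"])
            return c["value"] + exploration

        return max(children, key=score)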

I'll keep you posted.

Once somebody can figure out a rigid procedure that, when followed, causes Accenture presales engineers to write robust working code that actually meets the criteria, that procedure can be ported to work with LLMs. The procedure in question is allowed to be quite expensive when run with real people, because once it's ported, LLMs are cheap.

I suspect there does exist some approximate solution for the above, but also I expect it'll end up looking like some unholy combination of test-driven development, acceptance testing, mutation testing, and checking that the tests actually test for meeting the business logic requirements (and that last one is probably harder than all the other ones combined). And it will take trying and iterating on thousands of different approaches to find one that works, and the working approach will likely not work in all contexts.
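If it helps to make "unholy combination" concrete, here's roughly the shape of loop I have in mind. This is purely a sketch of the idea: pytest is real, but every callable passed in (generate_patch, apply_patch, revert_patch, mutation_score) is a placeholder for whatever LLM call and mutation-testing harness you'd actually wire up.

    import subprocess

    def gate(repo_path, acceptance_tests, mutation_score, min_mutation_score=0.8):
        """Accept a candidate change only if it clears all three gates:
        unit tests, acceptance tests, and a mutation-testing threshold.

        mutation_score is a callable returning the fraction of mutants killed --
        the stand-in for "do the tests actually test the business logic".
        Assumes the repo's tests run under pytest.
        """
        if subprocess.run(["pytest", "-q"], cwd=repo_path).returncode != 0:
            return False
        if subprocess.run(["pytest", "-q", acceptance_tests], cwd=repo_path).returncode != 0:
            return False
        return mutation_score(repo_path) >= min_mutation_score

    def iterate(task, repo_path, acceptance_tests,
                generate_patch, apply_patch, revert_patch, mutation_score,
                max_attempts=50):
        # Cheap attempts are the whole point: generate many candidate patches
        # and keep the first one that clears every gate.
        for _ in range(max_attempts):
            patch = generate_patch(task, repo_path)   # placeholder LLM call
            apply_patch(repo_path, patch)              # placeholder
            if gate(repo_path, acceptance_tests, mutation_score):
                return patch
            revert_patch(repo_path, patch)             # placeholder
        return None

The hard part the sketch glosses over is, as noted, the last gate: deciding whether the tests themselves actually encode the business requirements.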

... can you give up development rights to all of the land except the little patches you actually want to put windmills on?