
doglatine

20 followers   follows 2 users   joined 2022 September 05 16:08:37 UTC

User ID: 619


FWIW dude I really like your substack and it’s now one of the blogs I’m most happy to see updates for!

That much is implied by the very term 'Russophobia'. Otherwise it would just be called 'having an entirely rational and appropriate attitude to Russia'.

Amazing! Thank you.

Good response! Yes, I agree FPV drones are unlikely to be decisive in a naval war. Insofar as China's dominance in drones raises concerns about a US-China conflict, it's because of what it suggests about China's wider industrial capacity. I think the most plausible 'long war' scenario here involves China imposing a blockade/maritime exclusion zone around Taiwan, triggering an ongoing and gradually-escalating naval conflict with the US. I agree that submarines will likely be very important, and that the US has a pretty significant edge there. Where I expect China to dominate is in anti-ship missiles and light combatants like the Houbei class, which will effectively exclude the US Navy from the SCS.

It’s definitely — and explicitly — pro-Democratic party, and features calls for political donations. However, it also feels (to me) quite fresh and direct and pretty bold in its analysis.

For what it’s worth, as someone who loves “muscular liberalism”, many of my favourite parts of the Culture books are when you get to see its bared teeth (perhaps most spectacularly at the end of Look To Windward with the Terror Weapon). There’s a reason why all Involved species know the saying “Don’t Fuck With The Culture.” I fantasise about being part of a similarly open, liberal, and pluralistic society that is nonetheless utterly capable of extreme violence when its citizens’ lives and interests are threatened.

Just to say, this was the most interesting post I’ve read on the Motte for a long time, so thanks for sharing your experiences, very different from the typical fare here. In case anyone else is reading, I’d be similarly interested to hear from others whose identity and experiences give them insights that others may miss.

It also doesn’t make a ton of sense, especially given Trump’s line about how Biden hates her.

Yeah he fucked up that line. Looked canned and smug and insincere. By contrast his hits on Biden in the first debate looked like brutal honesty and really landed.

I agree with most of this, but I also think that the financialisation of many Western economies probably has exerted a significant toll on industrial state capacity. My suspicion is that the US couldn’t pull off the same feats it managed in WW2 or much of the Cold War because it simply doesn’t have enough welders, factories, machine shop operators, aeronautical engineers, stevedores, and so on.

Likewise, while I think the narrative that “we don’t build things any more” is largely false, we’ve certainly transitioned into building different kinds of thing, with an emphasis on bits over “its”.

I’m less sure about other forms of state capacity. While the US was able to enforce COVID rules fairly effectively, this doesn’t impress me much; largely the rules were about convincing people to refrain from doing certain things and enforcing this. It’s less clear to me that the US could, for example, mobilise an additional 10 million military personnel as it did over the course of WW2.

If I’m focusing on war scenarios here, it’s because the possibility of a war with China looms large here. While the opening days of any such war will draw on stockpiled munitions, in any prolonged conflict the US will be sorely tested in its ability to rapidly regenerate stockpiles and replace losses, especially of surface combatants.

I’m eager to have my pessimism here overruled, but there are times when the tide goes out and you realise which states have been swimming nude, and I worry the US isn’t wearing trunks.

You can limit the harms of people’s shit decisions and put barriers in place to deter them from making the worst ones.

That said, I agree with the spirit of what you’re saying. I tend towards being maximally permissive about self-funded medical procedures by adults. From plastic surgery to suicide, as long as the state isn’t contributing a penny, you’re a compos mentis adult, and no-one else will be directly harmed, then I see no good reason for imposing any significant limits. Minors are obviously a very difficult case, and deserve greater protections.

I’m worried about the Philippines in particular. Lots of regional disputes (e.g. the Spratly Islands) that would serve as testing grounds for the Chinese military but wouldn’t trigger full-scale US involvement.

Yeah, I’ve seen some wild liveleak stuff in the past but the scene with the dog was enough to make me decide my brain really doesn’t need this content in it.

I don’t think it’s an insuperable problem. A difficult one to be sure, but academic incentive structures are a lot more mutable than a bunch of other social problems if you have the political will. There’s also the fact that the current blind review journal-based publishing system is on borrowed time thanks to advances in LLMs, so we’ll need to do a fair amount of innovating/rebuilding in the next decade anyway.

Top or bottom?

I’m glad anyone got it! Very much an imperfect analogy but it felt right somehow. /u/zeke5123 has the core of it — that Vivek will end up using Trump as a figurehead to advance his own ends and ambition. Maybe I’m overestimating Vivek and/or underestimating Trump, but for all his animal cunning, I still see some confused generous boomer in Donald, whereas Vivek is all 2nd gen migrant ambition and ruthlessness. There’s also the fact that Puzzle is vastly more virtuous than either of them, but as I say, it was mostly a vibes-based analogy.

There is no stealth in space.

Just FYI, and also this. Obviously the programmes are super classified so we don't know how stealthy the satellites are, but hiding stuff in space isn't impossible.

Good questions!

  1. Yes, absolutely. In fact I think people can hold full-blown beliefs that are in contradiction, although (unlike S-dispositions) this creates genuine cognitive dissonance.

  2. This is tricky because individuating belief contents is tricky. When an astrophysicist says "the sun is heavy" and a 10-year-old child says "the sun is heavy", do they hold the same belief? In general, I'm inclined to be sloppy here and say it's a matter of fineness of grain; there's a more coarse-grained level at which the physicist and the child hold the same belief, and a fine-grained level at which they hold different beliefs. That said, I'm inclined to think that individuating S-dispositions should if anything be easier than individuating beliefs insofar as it's more closely linked to public behaviour and less linked to normatively-governed cognitive transitions (the kind of inferences you'd make, etc.). To be a bit more rigorous about it, I'd say two individuals A and B share an S-disposition P to the extent that (i) they are inclined to assert or deny P in the same social contexts, and (ii) do not integrate P with their broader cognitive states and behaviour in the manner characteristic of belief.

  3. Great question. A few simple rules of thumb. (i) As noted above, conflicting S-dispositions do not generate negative emotional affect in the same way that conflicting beliefs do (cognitive dissonance); (ii) S-dispositions are relatively behaviourally and inferentially inert, and do not play a significant role in people's lives even in cases where beliefs with the same content do (e.g., someone who pays lip-service to climate change narratives vs a true believer); (iii) S-dispositions are almost exclusively generated and sustained by social contexts, whereas beliefs can frequently be arrived at relatively independently (there are big social influences on beliefs of course, but the point is that there are only social influences on S-dispositions); (iv) individuals feel no real obligation or interest in updating S-dispositions as compared to beliefs, etc. Applying these heuristics to oneself can help one distinguish the two.

  4. Again, a very good and interesting question, and one I'm still thinking about. I think the clearest causal arrow here runs from S-dispositions to beliefs: someone might adopt animal rights-related S-dispositions for social reasons, and subsequently go on to translate some of these into full-blown beliefs. In the opposite direction, one could imagine a person's belief system being "hollowed out", so they assent and dissent from the same propositions but without any of the interest and commitment that they used to have; something like this can happen to religious people, for example, but we should distinguish those cases from instances where people genuinely 'lose their faith' and acquire full-blown atheist beliefs. More broadly, I expect there to be lots of interesting connections between the two.

Lots of great points here; let me respond to a few.

First and foremost, this seems absurdly difficult to measure rigorously.

Agreed, although this is a problem with most psychological and social states. There is a robust conceptual distinction between someone joking vs being sincere, but actually teasing that apart rigorously is going to be hard (and you certainly can't always rely on people's testimony). Instead, when it's really essential to make a call in these cases, we rely on a variety of heuristics. The point of my screed is not that I've found a great new psychometric technique, but rather that I've identified an important conceptual distinction (one that psychometric or legal heuristics could potentially be built around).

Maybe they really believe in climate change but they're just selfish and care more about their own convenience

Right, although that would generate predictions of its own (e.g., changing their behaviour immediately when the convenience factors change). Hard to measure for sure, but not impossible (I think we do this all the time for lots of similar states).

Second, I think a lot of the perceived sparseness is availability bias... if you look at a broader and less interesting class of beliefs I expect you'd find 99%+ of beliefs are genuine

That's possibly true, but not hugely interesting except for framing purposes since "counting beliefs" is a messy endeavour in the first place. Perhaps my main thesis could be reframed as "a lot of things we are inclined to think of as being beliefs aren't actually best understood as beliefs but as a distinctive type of state." Moreover, any serious attempt to quantify the prevalence of S-dispositions vs beliefs is going to have to grapple with some messy distinctions between e.g. explicit beliefs that are immediately retrievable (my date of birth is XX/XX/XXXX) and implicit beliefs that are rapidly but non-immediately retrievable from other beliefs (Donald Trump is not a prime number).

Does this 60% belief count as "genuine?" And would your study be able to tell the difference between that and someone with a hypocritical professed 99% belief?

Again, this is messy in practice, but as long as we stick to the conceptual level it's fairly clear-cut, insofar as we'd expect different behaviour from a rational sincere Bayesian 60% believer vs a hypocritical 99% believer (consider, e.g., betting behaviour).
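To make the betting-behaviour point concrete, here's a toy sketch (my own illustration, not from any actual study; all the payoffs are made up): a sincere Bayesian with 60% credence and someone who merely professes 99% credence should diverge sharply on which bets they accept.

```python
# Toy expected-value model of betting behaviour under different credences.
# The payoffs and credences are hypothetical numbers chosen for illustration.

def expected_value(credence, payout_if_true, stake):
    """Expected value of a bet that pays `payout_if_true` if the
    proposition turns out true, and loses `stake` if it is false."""
    return credence * payout_if_true - (1 - credence) * stake

# Even-odds bet: win $100 if true, lose $100 if false.
# Both the sincere 60% believer and the professed 99% believer accept,
# so this bet fails to distinguish them.
even_60 = expected_value(0.60, 100, 100)   # 0.6*100 - 0.4*100 = 20
even_99 = expected_value(0.99, 100, 100)   # 0.99*100 - 0.01*100 = 98

# Lopsided bet: win $10 if true, lose $100 if false.
# A genuine 99% believer should still take it; a sincere 60% believer
# should refuse. A hypocritical professed 99% "believer" who refuses
# reveals that their credence is not what they claim.
lopsided_60 = expected_value(0.60, 10, 100)   # 0.6*10 - 0.4*100 < 0
lopsided_99 = expected_value(0.99, 10, 100)   # 0.99*10 - 0.01*100 > 0

print(even_60, even_99, lopsided_60, lopsided_99)
```

The design choice here mirrors the conceptual point: you discriminate between the two states not with a single question but by finding a decision where the behaviour predicted by the professed credence and the behaviour predicted by the actual credence come apart.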

In theory something along the lines of your study, done extremely carefully, could be useful.

To be clear, this is theoretical psychology/philosophy of mind rather than policy recommendations, and any actual implemented policies would be several research projects downstream.

One simple way for Twitter to monetise would be to charge for Blue Checkmarks. Maybe offer some power-tools in return (e.g., analytics). Maybe you could even charge different amounts for different tiers of Blue Check.

Absolutely - the deterrent effect of a missile shield isn't to protect against a general nuclear war in which Russia, China, or the US decides to hit the big red button. Given the constraints of MAD, I'd like to think that no state would rationally launch a first strike at scale. The point of the shield is to prevent countries engaging in low-level nuclear bullying, or attempts to use nuclear weapons to gain a limited battlefield advantage. Existing MAD doctrine doesn't really cover these kinds of contingency: the US isn't going to nuke Moscow just because Russia uses a battlefield nuke against a Ukrainian airbase.

Eh, feels more like milquetoast centre-leftism to me. A giveaway to the middle classes. At least when I use the term "progressivism", I mean to refer specifically to the complex of identity politics movements.

You're talking about this passage?

Sometime around 2030, there are surprisingly widespread pro-democracy protests in China, and the CCP’s efforts to suppress them are sabotaged by its AI systems. The CCP’s worst fear has materialized: DeepCent-2 must have sold them out!

The protests cascade into a magnificently orchestrated, bloodless, and drone-assisted coup followed by democratic elections. The superintelligences on both sides of the Pacific had been planning this for years. Similar events play out in other countries, and more generally, geopolitical conflicts seem to die down or get resolved in favor of the US. Countries join a highly-federalized world government under United Nations branding but obvious US control.

What's your objection? I think this paragraph makes clear that this isn't really an organic phenomenon; it's humans being memetically hacked by AI systems. We're long past the point in the story where they "are superhuman at everything, including persuasion, and have been integrated into their military and are giving advice to the government." And the Chinese AGI had been fully co-opted by the US AGI at that point, so it was serving US interests (as the paragraph above again makes clear).

I'd also flag that you're probably not the only (or even the main) audience for the story - it's aimed in large part at policy wonks in the US administration, and they care a lot about geopolitics and security issues. "Unaligned AGIs can sell out the country to foreign powers" is (perversely) a much easier sell to that audience than "Unaligned AGIs will kill everyone."

I assume AI.

Yes, thanks for the expectations-tempering, and agreed that there could still be a reasonably long way to go (my own timelines are still late-this-decade). I think the main lesson of o3, from the very little we've seen so far, is probably to downgrade one family of arguments/possibilities, namely the idea that all the low-hanging fruit in the current AI paradigm has been taken and we shouldn't expect any more leaps on the scale of GPT3.5->GPT4. I know some friends in this space who were pretty confident that Transformer architectures would never be able to get good scores on the ARC AGI challenges, for example, and that we'd need a comprehensive rethink of foundations. What o3 seems to suggest is that these people are wrong, and existing methods should be able to get us most (if not all) of the way to AGI.