My summary of your claim is: "Trump did great diplomacy in 2017-2018 and this resulted in fewer military provocations from North Korea afterward."
But I don't think that holds, because there have still been plenty of provocations. My semi-insider understanding is that they are greater in both number and severity than before. For example:
- Major missile tests continued yearly through 2023, and in 2022 they flew a missile over Japan.
- In 2022, a North Korean drone got within 2 miles of the Blue House, the South Korean presidential compound. This type of drone is more like a cruise missile than a quadcopter.
- In 2024, the North officially abandoned its policy of reunification with the South, and there have been all sorts of major border skirmishes; that same year the North launched artillery into the South.
- The North has been sending troops to fight in Ukraine and sending supplies to the Russians.
If you are seeing fewer provocations in the news, I think that's just your media diet.
The man argued with me that Trump's North Korea policy was disastrous, and I remember what North Korea was like before when they were launching missile tests every few months and nobody could solve that perpetual crisis.
NK tested both a nuclear weapon and missiles in Trump's first term, and there's a whole Wikipedia article on the 2017-2018 nuclear crisis, during which many wonks thought we were on the brink of WW3. NK cyber operations have skyrocketed and are blamed for hundreds of billions in theft and roughly half of the hacks that make it into the media.
Nobody on either side of the aisle thinks the perpetual crisis has been "solved".
The head of security research at Anthropic recently gave a nice talk at unprompted (a security-meets-AI conference). He walks through how simple it was to find exploits in the Linux kernel and a famous web app, and shows actual examples of the claude commands he ran to generate these exploits. It's quite accessible (if you have any programming background at all, you can understand everything), and it's a more fun watch than the Anthropic blog posts.
You can find the video at: https://youtube.com/watch?v=1sd26pWhfmg.
This is only true in the darkest of gray markets. In the white-hat arena that Anthropic would be forced to bargain in, these exploits go for tens of thousands of dollars.
The military is of course willing to pay black-market rates, but Anthropic kinda burnt that bridge... and I'd be honestly pretty surprised if In-Q-Tel (famous CIA front company) starts investing in Anthropic...
but what precisely is the intuition that is being violated here?
The answer is I don't know, and there might not be. But historically:
- Every famous mathematician of the 19th century (Cauchy, Ampère, Dirichlet, Riemann) made serious mistakes "proving" false theorems by making analogies between continuous and discrete functions.
- Lots of catastrophic engineering failures have their root cause in assuming that approximations are better than they are. A timely example: the Patriot missile system failed during Desert Storm in 1991 because of an approximation in which the coders used 0.1-second time increments, but 0.1 has no exact representation in binary (a quick numerical sketch of this is below).
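To make the Patriot example concrete, here is a rough back-of-the-envelope sketch. The 24-bit register width, ~100-hour uptime, and Scud speed are the commonly reported figures; the code itself is just my own toy illustration, not the actual system:

```python
# 0.1 has no exact binary representation, so a clock that counts in tenths
# of a second accumulates a tiny truncation error on every tick.

tenth = int(0.1 * 2**24) / 2**24     # 0.1 truncated to 24 fractional bits
error_per_tick = 0.1 - tenth         # ~9.5e-8 seconds lost per tick

ticks = 100 * 3600 * 10              # one tick every 0.1 s for 100 hours
drift = error_per_tick * ticks
print(f"clock drift after 100 hours: {drift:.2f} s")    # ~0.34 s

scud_speed = 1676                    # rough closing speed in m/s
print(f"tracking error: {drift * scud_speed:.0f} m")    # ~575 m
```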
So when you talk about your approximations being "good enough" without any effort to justify why, it rings all of the alarm bells.
I suspect that any efforts you spend to clarify exactly what it means for an approximation of yourself to be good enough will result in:
- very clearly articulating the difference in belief that you have with other people,
- providing concrete empirical tests that can be used to clarify/change people's moral intuitions.
I do have concrete examples in mind of what this would look like, but unfortunately don't have the time/skill to type them out. It's the sort of thing that I would talk about with other bored mathematicians at conferences over beer with a whiteboard and a lot of weird, technical pictures.
though I'm not sure what doesn't make sense about it?
"floating point accuracy" is the accuracy possible with a certain number of bits. As soon as you say that you have "8-bit" numbers, that immediately defines what floating point accuracy is. And so every 8-bit model has 8-bit floating point accuracy and can never possibly have 64-bit floating point accuracy.
I am closer, right now, to the person I was a second ago than the person I was a week ago, or the person I'll be next month. This is fine. This is entirely unremarkable, and taken for granted by just about everybody who wasn't hit by a bus in the interim. But the point is that I consider this grounds to accept (bounded) deviations from ground truth in a subsequent digital copy as not a particularly big deal. If someone demands something even closer? Well, that's their prerogative.
This is one of those spots where again I think mathematical formalism makes the distinction clear. The function you are describing above is continuous (and probably differentiable, and probably has other nice regularity conditions as well, but we don't need to assume those for the sake of argument here). The earring/copy phenomenon is clearly not continuous. Intuitions about continuous functions very rarely apply to non-continuous functions.
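A textbook illustration of that last point (my own choice of example, not anything from your post): each $f_n(x) = x^n$ is continuous on $[0,1]$, but the pointwise limit is $0$ for $x < 1$ and jumps to $1$ at $x = 1$, so it is not continuous. Every intuition built from the well-behaved $f_n$ quietly fails at the limit, and mistakes of exactly this flavor are the ones Cauchy and company made.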
Everything I wrote is true for rationals as well.
This is an overall reasonable response.
In other words, you're conflating exact representation with sufficient representation, which is what I care about, and which is significantly more tractable.
What I originally started trying to write (and admittedly got sidetracked because it was taking too long) is that I think this computational complexity framework can provide a way to understand the disagreements you have with other people. My idea is that you are willing to settle for a "large epsilon", while other people all require a "small" (or possibly zero) epsilon.
I don't think any amount of wordsmithing can get around this disagreement or make people change their minds about the level of epsilon that seems reasonable to them. In principle, though, I can imagine some hypothetical experiments where we actually copy people with different levels of epsilon, observe the resulting behavior, and perhaps thereby convince people that a certain epsilon is appropriate.
But we do not demand formal proofs for identity anywhere else.
You're right that we never do formal proofs of identity when there is physical continuity, but we always do formal proofs of identity whenever we have an avatar representing us. For example, whenever you connect to themotte, your web browser does a TLS handshake in which the server cryptographically proves its identity to your browser (and, with client certificates, the proof can go both ways).
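For concreteness, here is roughly what that identity proof looks like from code. This is a minimal sketch using Python's standard ssl module; the hostname is just an illustrative stand-in, not anything about how themotte is actually deployed:

```python
import socket
import ssl

hostname = "www.themotte.org"                 # example hostname

context = ssl.create_default_context()        # loads the trusted root CAs

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        # wrap_socket() performs the TLS handshake: the server presents a
        # certificate chain, and the context checks that it was signed by a
        # trusted root and that it actually names this hostname.
        cert = tls.getpeercert()
        print("server proved control of:", cert["subject"])
```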
It seems the earring is closer in spirit to the avatar than to physical continuity.
Post-training quantization is often enough to get 8-bit models close to floating-point accuracy.
This just doesn't even parse.
I am a puny shape rotator. I will therefore respond to your post with some math that rationalists-cum-philosophers are not nearly as familiar with as I would like.
In particular, when you say:
I think that such a high-fidelity model can, in the limit, pass as myself, and is me in most/all of the ways I care about.
I believe your "in the limit" here is doing more work than you intend. (And this makes everything else in your post moot.)
Let's imagine that your consciousness can be fully represented by a set of real numbers $S$. (This, I think, is a premise you would accept.) Now let's imagine we have a physical device that could instantiate another version of $S$ in a different physical system. (In your post, the earring is instantiating your "youness" in the earring itself, effectively copying $S$ from inside your biological body to inside the earring.) The fact that these are real numbers stored in $S$ implies a lot of counter-intuitive results. In particular, in a universe where such a device exists, we can prove that P = NP (and even the stronger but less famous P = PSPACE), and, depending on your exact definitions, even uncomputable problems like the halting problem can be solved.
For this reason, philosophically minded mathematicians/computer scientists/physicists basically all reject the idea that arbitrary copies of physical objects can be created. (Note that this idea is distinct from the no-cloning theorem in quantum physics, and everything I said above holds in a purely Newtonian universe. Things obviously get even wonkier when you add quantum effects or relativity to the mix.)
So this means any physically realizable earring (or super-claude-code in earring form) can only ever approximate your set $S$. And now that we are talking about approximations, we need to define a measure of "goodness" of an approximation. But this opens up the can of worms that improving the "goodness" of an approximation linearly generally requires exponentially more compute in most physical systems [1]. And where is that exponential compute coming from?!
It also opens up the even deeper problem of verification: if someone (i.e. the earring) claims to be an $\epsilon$-approximate version of yourself, and you have decided that $\epsilon$ is sufficiently small, how can they prove that claim? Such a claim, in general, also requires exponential compute to verify. So in some sense it is computationally impossible to know whether the earring is actually "evil" or not.
So I reject your premise here. It's not because I am not a functionalist (I am), but because I care about computational complexity.
A decent but highly technical starting point for this style of reasoning is Scott Aaronson's 53-page essay Why Philosophers Should Care About Computational Complexity or his paper NP-complete Problems and Physical Reality.
[1] Here's a giant list of claude-generated examples:
Numerical/Computational:
- N-body simulations: Doubling precision in gravitational simulations requires 4x more particles and 16x more compute (O(n²), or O(n log n) with approximations)
- Monte Carlo integration: Halving the error requires 4x more samples (error scales as 1/√n; a quick numerical check is sketched after this list)
- Floating point precision: Each additional decimal digit roughly doubles memory and compute cost

Machine Learning:
- Neural scaling laws: GPT-style models show loss decreasing as C^(-0.05), meaning 10x compute gives 12% loss reduction
- Training convergence: SGD error decreases as O(1/√t), so halving error needs 4x iterations

Physics Simulations:
- Quantum systems: Exact simulation of n qubits needs 2^n amplitudes; each additional qubit doubles cost
- Fluid dynamics (CFD): Halving mesh spacing in 3D requires 8x more cells and ~16x more compute (due to CFL condition on timesteps)

Signal Processing:
- Shannon-Hartley: Doubling channel capacity requires exponentially more SNR

Cryptography (inverse example):
- Brute force key search: Each additional bit of key doubles the search space

The pattern: entropy, dimensionality, and error propagation conspire against linear improvement.
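As a quick sanity check of the Monte Carlo entry above (a toy example of my own, nothing rigorous): estimate pi by throwing random points at the unit square and watch the error shrink like 1/√n, so each 4x in samples buys only about 2x in accuracy.

```python
import math
import random

def estimate_pi(n, seed):
    """Crude Monte Carlo estimate of pi from n points in the unit square."""
    rng = random.Random(seed)
    inside = sum(rng.random()**2 + rng.random()**2 <= 1.0 for _ in range(n))
    return 4.0 * inside / n

def rms_error(n, trials=20):
    """Root-mean-square error over several independent runs."""
    errs = [estimate_pi(n, seed) - math.pi for seed in range(trials)]
    return math.sqrt(sum(e * e for e in errs) / trials)

for n in (10_000, 40_000, 160_000, 640_000):
    print(f"n = {n:>7}   rms error = {rms_error(n):.5f}")
# Each 4x increase in samples roughly halves the error: linear gains in
# accuracy cost multiplicative amounts of compute.
```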
The text also doesn't literally ever call Adam the "first" man. You have to infer it from the fact that no men are mentioned before him. To call it merely "tradition" that Adam was the first man and not something the Bible says, however, would be absolutely ridiculous. It is similarly ridiculous to say the Bible does not call Tubal-Cain the first metal worker.
Technically, the tale that Tubal-Cain was the first smith comes from tradition, the Biblical text does not say it.
Yes it does. Genesis 4 lists the descendants of Adam, and then 4:22 states
Zillah also had a son, Tubal-Cain, who forged all kinds of tools out of bronze and iron.
The point is not to reject YEC. The point is to show that @Eetan's joke falls flat because he is not actually pointing out an inconsistency with YEC (even though, as you note, many exist). If Hegseth had said something about star distances, and @Eetan had then joked about star distances violating YEC, the joke might have landed.
Your joke falls flat because the stone age is fully compatible with young earth creationism. The standard timeframe for the end of the stone age is 4000-2000 BC depending on location/definition, and the "youngest" of YECs date the world at ~6500 years old (roughly 4500 BC), which is still before the end of the stone age.
I just reread the opening of Genesis, and the first metal tools in the Bible actually appear much earlier than I had thought, as early as Gen 4:22 with the birth of Tubal-Cain: "Zillah also had a son, Tubal-Cain, who forged all kinds of tools out of bronze and iron." That makes eight generations of stone age in Genesis: Adam → Cain → Enoch → Irad → Mehujael → Methushael → Lamech → Tubal-Cain.
I think gloating is poor form in these cases.
But telling these stories is still important as a warning to others not to repeat them.
I mostly agree. My point is just that elites of all belief systems read. And so saying "my civilizational enemies are all well-read," as @coffee_enjoyer did, as a way to associate reading with his outgroup is nonsensical.
I find your comment nonsensical.
The great cathedrals were built by men who had no concept of literature
You don't consider the Bible "literature"?
All of my civilizational enemies are well-read.
The elites of both the red and blue tribes are well-read; they just read different things and want to be known for reading different things. OP has good examples of blue-tribe reading, so here are some examples of red-tribe-coded reading: the Bible, ancient Greek plays, Shakespeare, John Locke, the Federalist Papers, Heinlein, Tom Clancy, Little House on the Prairie.
Hahaha! You made my day :)
Yes. I was a Naval Academy graduate, but am now a pacifist.
A major purpose of the Iran War is the formation of combat leaders. I haven't seen anyone mention this elsewhere, but it is quite explicit within the military officer hierarchy that one of the main reasons we invade for "funsies" is to keep us in practice for when we actually "need" to invade someone for real.
I believe your link goes to a problem from the ARC-AGI-1 dataset, not ARC-AGI-3. The former is basically "solved" at this point.
The callousness is the point, isn't it? No one self-labels as "I want to bomb brown people". It's an accusation against other people that they are callous because "They want to bomb brown people." Suggesting that someone else is callous doesn't strike me as callous.
It's because they wrote a "good prompt" to get the models "thinking" in the "right way". No data leakage here.
The highest scoring AI couldn't complete more than 0.5% [on ARC-AGI-3]
It's worth pointing out that within a day, the AIs had gotten to 36%: https://www.symbolica.ai/blog/arc-agi-3.
In general, though, I agree. My take is that AIs are good at solving leetcode-style problems but nothing bigger. The way to be productive with them is to know how to divide your work into leetcode tasks for the AI and non-leetcode tasks for people.
As soon as streaming became the main business it was over, because bandwidth considerations came into play, similar to the space considerations of Redbox, and it was thus impossible to keep an inventory of that size, especially when the licensing agreements were more complicated and probably required them to pay for rights even for stuff that wasn't in high demand.
It's 100% the licensing agreements that cause this shortage, and not bandwidth.
This just does not match my understanding of recent history at all... so I guess it's useful for me to understand that there are people in the world who view things the way you do :)