
self_made_human

amaratvaṃ prāpnuhi, athavā yatamāno mṛtyum āpnuhi ("attain immortality, or die trying")

14 followers   follows 0 users  
joined 2022 September 05 05:31:00 UTC

I'm a transhumanist doctor. In a better world, I wouldn't need to add that as a qualifier to plain old "doctor". It would be taken as granted for someone in the profession of saving lives.

At any rate, I intend to live forever or die trying. See you at Heat Death!

Friends:

A friend to everyone is a friend to no one.



User ID: 454


Googling for radiation exposure limits linked me to this, which cites 50 mSv/year as the US federal occupational limit; the UK and Germany set 20 mSv/year for radiation workers.

I seem to have misremembered, but that still doesn't change anything. The "official" maximum dose figures are deeply retarded. That's what you get when you use ALARA/LNT models and ignore hormesis.

As a natural experiment, the town of Ramsar in Iran has hotspots with ~260 mSv a year, without any detectable consequences for the locals. Even assuming an average of 80 mSv (well above the legal limits), there are no detectable long-term issues.

Google's AI claims that 1Sv is associated with a 5% chance of developing a fatal tumor.

That's correct, as far as I can tell. 1 Sv is bad for you in both LNT and realistic terms. But that is a lifetime risk: you won't lose 5% of the crew in 2 years. It really isn't that big of a deal, and there are enough people with large enough risk appetites (thousands, probably millions of them). That's an increased cancer risk comparable to heavy daily drinking, and there are plenty of alcoholics around.

The average person's lifetime risk of developing any cancer is roughly 40-45%. A 5 percentage point absolute increase means going from, say, 42% to 47%. That's meaningful but not dramatic.
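The arithmetic above, as a quick sanity check (the 42% baseline and the 5-point increment are the figures quoted in this thread; both are approximate):

```python
# Absolute vs. relative framing of the quoted cancer-risk figures.
baseline = 0.42    # approximate average lifetime risk of developing any cancer
increment = 0.05   # absolute increase attributed to ~1 Sv, per the figure above
new_risk = baseline + increment
relative_increase = increment / baseline

print(f"lifetime risk: {baseline:.0%} -> {new_risk:.0%}")   # 42% -> 47%
print(f"relative increase: {relative_increase:.0%}")        # ~12%
```

A 12% relative bump in a risk everyone already carries is the less alarming, and arguably more honest, way to read the same number.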

Age-adjusted cancer mortality in the US rose through most of the 20th century, peaked around 1990-1991, and has been falling since; the decline from that peak is roughly 33%, which is substantial. Even after an absolute 5-point increase in all cancers (not necessarily fatal ones), we would still be well ahead of that peak. A 5-point increase in lifetime fatal-cancer risk is real, but it sits comfortably within the range of risks that coal miners, commercial fishermen, and military personnel have historically accepted as part of their profession - and those professions were not considered monstrous.

I think it is shaky to assume that safetyism extends as far as you think it does. Especially when SpaceX, as a private entity, is willing to assume more risk and hire accordingly. The relevant comparison isn't "is this within the comfort zone of a desk-job radiation worker" but "is this acceptable for a volunteer who has been fully informed of the risk profile and consents."

Worst case, we come up with thicker radiation shielding and shorter trips, and eat the cost. That's leaving aside massive improvements in cancer treatments, which will likely continue, or the fact that permanent colonists would spend most of their time indoors.

A cursory googling suggests that the energy contained in Earth's magnetic field is similar to the annual energy consumption of Denmark. Taking their power plants to Sun-Mars L1 will be even less popular with them than what Trump plans with Greenland.

Uh... What exactly is this objection trying to show? Do you think we have to steal a few nuclear reactors from Denmark to make this work? I recall the proposal wanted 450 MW for the L1 dipole, which is a high but not ridiculous power draw. A drop in the bucket, if we want a large number of humans traipsing about on the Martian surface.
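For scale, here is what 450 MW of continuous draw amounts to over a year (the 450 MW figure is the one recalled above; the conversion is mine):

```python
# Annual energy delivered by a continuous 450 MW supply, in TWh.
power_w = 450e6              # 450 MW in watts
hours_per_year = 8766        # 365.25 days * 24 h
energy_twh = power_w * hours_per_year / 1e12
print(f"{energy_twh:.1f} TWh/year")   # ~3.9 TWh/year
```

Around 3.9 TWh/year, i.e. roughly a tenth of Denmark's annual electricity generation (~30-35 TWh, my rough figure, not the post's). No Danish power plants need to be requisitioned.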

main reason the AI boom did not happen in 2010 was that chips did not have the power back then.

GPT-2, which arguably kicked off the whole thing, arrived during a significant compute overhang. I'm pretty sure it could have been trained with ease a decade or more earlier than it was. Probably GPT-3 too, though modern frontier models obviously saturate available compute today. That would have been sufficient incentive to invest even harder into GPUs than we historically did.

The problem is that we have no clue how to build a VNR. I mean, a space elevator looks trivial in comparison, as soon as we find a material with sufficient tensile strength (which may very well be never), we could figure out the rest without too much trouble.

I mean, I can imagine a continent with a billion robots which run robot factories, but this seems a very non-central example of a VNR. Something which simply mines asteroids and makes more of itself will probably have to be as different both from us meatbags and robots as meatbags are from robots.

Earth, with human civilization on it, is a proof-of-concept for a VNR. Without getting into arguments about how central an example that is (we're probably not launching the whole planet into interstellar space), the minimal requirements are probably far smaller: Earth is in no way optimized for self-replication. VNRs as popularly conceived need not be borderline magical nanotech; they might just be a few megatons of old-fashioned industry adapted for space that takes a decade to duplicate itself. Fortunately, the universe has megatons to spare, let alone years.

Sure, but this doesn't actually cut against the argument I was making nor is it an argument I'm trying to make. The medieval peasant analogy wasn't claiming that colonizers were noble altruists sacrificing themselves for posterity. The point was narrower: that resources which appear worthless at time T can become enormously valuable at time T+N, and that a society which systematically refuses to act on that kind of reasoning loses the future regardless of what motivated the people who did act.

The colonizers were largely motivated by immediate self-interest, and yet the long-run consequences for their descendants dwarfed anything they personally captured. The value accrued whether or not anyone intended it to. What this implies for space is that we don't actually need a population of selfless long-termists to get the process started. We need the incentive structures to align sufficiently that self-interested actors find it worth doing. That's largely an engineering and economics problem, not a motivational one.

Incredibly detailed rebuttal, AAQC nominated.

Thank you.

I can't really disagree with any of the specific rebuttals, although if I could revise my post I would argue that we should be focusing on developing the technologies required for this kind of self-sufficiency (Air miners, more advanced 3D printing, large scale organismal gene editing), before we set our sights on Mars.

I would like to note that there is no reason we can't work on self-sufficiency/ISRU while also "setting our sights on Mars".

Mars is not Alpha Centauri. The initial temporary and later permanent settlements will both have comparatively easy access to goods from Earth. There is absolutely no reason to "solve" the local manufacturing issue, whereas if you're trying to set up an interstellar colony of some kind, you would be wise to have such things nailed down well in advance.

More importantly, the mere act of trying is almost certainly necessary to even develop the technology for long-term sustainability. The best simulation of permanent off-world habitation is a less permanent off-world habitat.

The ecological argument was not necessarily that we should not colonize space, but rather we are focusing on the wrong aspects of question (how to get there) instead of how to survive there. The longest mission conducted outside of earth orbit is still Apollo, and it seems quite hubristic to me to assume we can even survive the journey to Mars when we haven't spent even a month outside of Earth's magnetosphere.

Figuring out the logistics of getting to Mars cheaply will massively kickstart the R&D required to survive there. I do not see what makes you so pessimistic, we've already sent humans to the Moon, we've had them live in microgravity for extended periods, and we know how to make radiation shielding. What exactly is so challenging about Mars? Why on Earth (pun not intended) would a month outside the Van Allens be lethal?

I must stress that incremental development is the sane way to do this. Elon doesn't intend to just send a dozen dudes and dudettes to Mars with a box of tools and tell them to figure it out once they get there. Nobody proposes that.

Not so with what you're suggesting. A world of asteroid mining, artificial wombs, and AI data centers in space is unrecognizable to a person today, and potentially not even a possibility

Huh? Who is this person in question? Do they live under a rock?

We've brought back samples from asteroids. We have artificial wombs, which have gestated large mammals for a significant period. We have companies launching IPOs for AI data centers in space, leaving aside:

https://en.wikipedia.org/wiki/Space-based_data_center

Companies pursuing space-based AI infrastructure

  • Aetherflux[27]
  • Blue Origin[28]
  • Google – Project Suncatcher[29]
  • Nvidia[30]
  • OpenAI[31][32]
  • SpaceX[33]
  • Starcloud[34]

Like seriously, we're going to need a bigger rock. This is all Tomorrow AD stuff.

I just don't see this future emerging in a world where technological development is slowing, demographics are collapsing, and there's no actual incentive to send humans (rather than robots or Von Neumann probes) to space. Only time will tell which of us is right.

Without getting into the weeds about the Great Stagnation, the technology required for space industrialization is within touching distance. Unless our technology becomes arrested at the level we are at, permanently, I don't see how it isn't inevitable.

I didn't engage your initial post's discussion about motivation, because it wasn't central to my earlier arguments. But it's worth noting that the average person's opinion (poorly informed as it is) is not and never has been particularly important for space flight.

The popularity of the original Space Race is grossly overstated. Most people back then didn't particularly care that much. It still happened because politicians and technocrats wanted to beat the Soviets, and the Soviet central planners wanted to beat the Americans (even if the average peasant would have traded the Soyuz for more vodka).

And that's just government. The world's richest man (last time I checked, I'm not keeping count) is specifically obsessed with space, and SpaceX has already achieved miracles. He has more money than either of us has grains of rice; if he wants it, he'll put people on Mars. It might not happen on the timelines he wants, but that is very far from it never happening.

Even if Musk dies of a ketamine overdose, his contributions won't go away either. SpaceX collapsed launch costs. The Chinese are already getting surprisingly close, and if not them, Blue Origin. Reusable rockets were a pipe-dream a few decades back. It's very easy to get used to miracles. Short of a nuclear war, this is the worst our space capabilities will ever be.

I also share your pessimism about government spending, but there are a lot of other things besides space (biological research, creating a circular economy, reducing the tax burden, etc.) that the government could be spending money on.

And there is no evidence that they're ever going to do it. To the extent that all money is fungible, I'd rather spend it on NASA than on many of the present alternatives.

Others have already noted the many issues with comparing the Biosphere projects with Martian colonization. I won't dwell on them.

There was clean air, water, shielding from radiation, and relatively plentiful food.

Radiation shielding for a Mars trip and sustained stay is not a massive problem. On the journey itself, you have the spaceship itself for protection, including the large stocks of water you need to bring along with you. On the ground, most near-term colonies will rely on covered shelter, using ISRU'd regolith.

https://science.nasa.gov/photojournal/radiation-exposure-comparisons-with-mars-trip-calculation/

Measurements with the MSL Radiation Assessment Detector (RAD) on NASA's Curiosity Mars rover during the flight to Mars and now on the surface of Mars enable an estimate of the radiation astronauts would be exposed to on an expedition to Mars. NASA reference missions reckon with durations of 180 days for the trip to Mars, a 500-day stay on Mars, and another 180-day trip back to Earth. RAD measurements inside shielding provided by the spacecraft show that such a mission would result in a radiation exposure of about 1 sievert, with roughly equal contributions from the three stages of the expedition.

That really isn't that big of a deal spread over the ~860-day mission, roughly two and a half years. It averages out to around 420 mSv/year, well above the conservative terrestrial occupational limits discussed earlier, but it's a one-off career dose, not a recurring annual exposure.
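The daily dose rates implied by the RAD figures in the quote (about 1 Sv total, with roughly equal thirds over the 180-, 500-, and 180-day stages) work out as follows:

```python
# Per-stage dose rates implied by the quoted MSL/RAD estimate:
# ~1000 mSv total, split roughly evenly across the three mission stages.
total_msv = 1000.0
per_stage_msv = total_msv / 3

for stage, days in [("outbound transit", 180),
                    ("Mars surface stay", 500),
                    ("return transit", 180)]:
    print(f"{stage} ({days} d): {per_stage_msv / days:.2f} mSv/day")
```

Transit dominates the rate; the lightly shielded surface stay comes in under 1 mSv/day, and regolith-covered shelters would pull that down further.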

If we absolutely had to, we could set up an artificial magnetosphere with a massive magnet (probably nuclear powered) at Mars-Sun L1 to deflect a great deal of radiation, or take the competing approach of a toroidal ring of charged particles around the planet, created by ionizing material from Phobos.

The claim that ISS astronauts "experience cancers at much higher rates" is contested; the long-term cancer data for astronauts is difficult to interpret given small sample sizes and selection-effect confounds.

Keeping an astronaut on the ISS costs about $1M/astronaut per day. And this is a space station that is relatively close to earth. Of course low earth orbit (LEO) where the ISS is, is halfway to most places in the inner solar system in terms of Delta V, so we're probably not talking about more than $10M/day per person for a Mars mission. For a colony on Mars with 100 people, that's close to a billion dollars a day. There is no national government, or corporation on earth that could support that.

That figure is derived by taking the total cost of the ISS program (roughly $150 billion over its lifetime) and dividing by total astronaut-days. But that's the all-in cost including design, construction, launch, operations, and a unique first-of-its-kind structure built by an international government consortium. It's not a marginal cost figure. Using it to project Mars colony costs is like calculating the cost of commercial aviation by dividing the full development cost of the Boeing 707 prototype by the number of passenger-miles flown in its first year of service. The number you get will be wildly unrepresentative of what mature operations eventually cost.

There is also something slightly confused about the arithmetic. You say "for a colony on Mars with 100 people, that's close to a billion dollars a day." But this assumes each of those 100 people requires daily resupply at ISS-equivalent cost, which is precisely what a Mars colony - with any degree of local production, agriculture, and manufacturing - would be working to avoid. The costs are front-loaded in infrastructure, not linear in daily operations. Consider the analogy of a factory: building it costs an enormous amount, but the operating cost per unit of output eventually becomes quite low.
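A toy model of the factory point, with made-up placeholder numbers (none of these are estimates of actual Mars costs):

```python
# Average cost per person-day when a large fixed cost is amortized over
# growing scale, with a flat marginal cost. All numbers are hypothetical.
fixed_cost = 100e9        # one-off infrastructure build-out
marginal_cost = 10_000    # per person-day resupply/operations

for people, years in [(6, 2), (100, 10), (1000, 30)]:
    person_days = people * years * 365
    average = fixed_cost / person_days + marginal_cost
    print(f"{people:>4} people over {years:>2} years: "
          f"${average:,.0f} per person-day")
```

The average cost collapses by three orders of magnitude as the amortization base grows, even though the marginal cost never changes. That, not the headline ISS figure, is the shape of the relevant cost curve.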

Even if technology development by industry leaders such as SpaceX lowers launch costs by 1,000x, which I find to be an absurd proposition, that's still $1 million/day with no return on investment.

Launch costs have already fallen by something like 20-30x from the Space Shuttle era. SpaceX targets $10-100/kg to LEO with Starship at scale; that's another 27-270x reduction from current Falcon 9 prices.
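The ratios above, made explicit. The per-kg figures are my own rough assumptions (commonly cited ballpark numbers), chosen to match the multipliers in the text:

```python
# Rough $/kg-to-LEO figures (assumptions, not official prices).
shuttle = 55_000              # Space Shuttle, approximate all-in cost per kg
falcon9 = 2_700               # Falcon 9, approximate current cost per kg
starship_targets = (100, 10)  # SpaceX's aspirational range per kg

print(f"Shuttle -> Falcon 9: ~{shuttle / falcon9:.0f}x")
for target in starship_targets:
    print(f"Falcon 9 -> Starship at ${target}/kg: ~{falcon9 / target:.0f}x")
```

Even the pessimistic end of the Starship range is a bigger single step than the entire Shuttle-to-Falcon transition.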

We do not know the exact limits, especially when considering longer-term alternatives to chemical rockets launched from the surface (launch loops, skyhooks). Once we have propellant depots and fuel production going in near-Earth space or on the Moon, prices will drop further anyway.

Even though SpaceX has improved the economics of launching to LEO and other near Earth orbits, our space capabilities seem to be degrading in most other areas. The promised Artemis moon missions are continually delayed by frankly embarrassing engineering oversights, and companies like Boeing, Lockheed Martin, and Northrup Grumman that were essential in the first space race can't seem to produce components without running over cost and under quality.

Previous titans in aerospace becoming sclerotic and senile would be concerning, if we didn't have a replacement. You've already named it. Who cares if Ford isn't in its 1970s prime, if other competitors keep churning out newer, better cars every year?

It's not clear that mammals can even reproduce in low gravity environments, and barring a large scale terraforming effort that would likely take millennia

Terraforming is retarded, I agree with that much. I'll elaborate later.

But even in the maximally pessimistic case where mammals somehow can't reproduce in low gravity, that can be trivially fixed. You can set up centrifuges on the Martian surface with sloped floors, such that the net perceived force is 1 g. You can chuck pregnant women in there for 9 months. Either way, Mars gravity is a far cry from microgravity; I'd be surprised if it wasn't sufficient by itself.
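The sloped-centrifuge idea above is straightforward to parameterize: centripetal acceleration combines at right angles with local gravity, so you solve for the spin that makes the vector sum 1 g. A sketch, with the 50 m radius an arbitrary illustrative choice:

```python
import math

# Net 1 g on a spinning, banked floor under Mars gravity.
g_earth, g_mars = 9.81, 3.71                 # m/s^2
a_c = math.sqrt(g_earth**2 - g_mars**2)      # required centripetal acceleration
radius = 50.0                                # m (hypothetical)
rpm = math.sqrt(a_c / radius) * 60 / (2 * math.pi)
bank_deg = math.degrees(math.atan2(a_c, g_mars))  # floor slope from horizontal

print(f"centripetal: {a_c:.2f} m/s^2, "
      f"spin: {rpm:.1f} rpm, bank: {bank_deg:.0f} deg from horizontal")
```

At 50 m that works out to roughly 4 rpm, a spin rate generally considered tolerable for adapted occupants, which is encouraging for the concept.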

Even outside of sealed environments, island ecologies on Earth are notoriously unstable because of population bottlenecks that eliminate genetic diversity and make key species vulnerable to freak viruses or environmental disruption.

Natural islands suffer because they cannot deliberately maintain gene flow, quarantine pathogens, or keep frozen backups of genetic diversity.

A human colony can bring:

  • large seed banks and rotating crop lines,
  • cryopreserved embryos and gametes for livestock (eventually),
  • microbial culture libraries,
  • and strict biosecurity.

Modern gene-editing tools are also up to the challenge. And given that actual islands are more ecologically stable when they're bigger, it's a problem that solves itself with scale.

You say "it does seem irresponsible to waste trillions of dollars and thousands of lives on something we are pretty sure won't work." But this contains two hidden assumptions. The first is that we are "pretty sure it won't work," which I've argued is considerably more uncertain than the post presents. The second is that the relevant alternative to spending money on space is spending it on something wise and beneficial. The implicit comparison is to some better use of a trillion dollars, but governments routinely spend comparable sums on things with far less clear rationale and far smaller upside. The question isn't "space versus something optimal" but "space versus the realistic counterfactual distribution of government and private spending decisions."


Anyway, that's it for the direct response to factual claims. I'm going to talk more broadly now:

It is incredibly myopic to focus on space exploration, colonization and industrialization in terms of "what can it do for us buggers on Earth today?". Cheap resources allow us to do things in space, without necessarily having to send them down a gravity well.

Consider the following thought experiment: it's 1350, you're a peasant somewhere in Europe, and someone offers you a deed to a parcel of land in a continent that hasn't been reached yet and probably won't be reachable for another two hundred years. You'd almost certainly decline. The deed isn't worth much to you. You can't get there. You might be dead before anyone gets there. Your children might be dead before anyone gets there.

But New York City real estate is worth quite a lot today.

The point isn't that the medieval peasant was stupid to decline the deed. The point is that a society made up of entirely that kind of peasant would lose the future. Valuing resources only on their present-day-usable value systematically undervalues resources that become accessible over timescales longer than individual human planning horizons. Space falls in this category. The Moon, Mars, the asteroid belt, and things further out represent real physical resources (mass, energy, volume, location) that are not accessible now but will become accessible. The entity that establishes presence, stake, and eventually defended claim over those resources will look, from the vantage of the far future, the way that the early settlers of Manhattan look from ours.

Per aspera ad astra isn't joking about the hard work involved. But in exchange, those who are willing to labor inherit the stars, while those who aren't rot on the ground.

I also think that terraforming is probably misguided as a near-term goal, and not for the reason the post implies. The reason is that making an entire planet livable for Earth biology is an enormously harder problem than building large-scale enclosed habitats, and the latter gets you most of what you actually want. O'Neill cylinders, properly constructed from asteroidal materials, could theoretically house more people in more comfortable conditions than all of Earth's current surface, without having to fight a planet's worth of hostile chemistry. The main contribution of Musk's Mars work, as I see it, isn't the specific Mars colony scenario. It's the secular reduction in launch costs that makes all of these other approaches cheaper. The Mars colony is the stated goal; the falling cost curve is the actual prize as far as I'm concerned.

And finally: I'm a transhumanist, so I'll just say the quiet part loud. A lot of arguments about long-term space colonization assume we're trying to preserve and spread a particular biological configuration of human beings. But if you're willing to include substantial biological or cybernetic modification, the space of possible future inhabitants of the universe expands considerably. Long-duration spaceflight and low-gravity environments become much less scary if the organisms doing them have been designed with that in mind. I'm not saying we have to go that route, only that the argument "humans can't survive in space long-term" is doing something odd by treating current human biology as a fixed parameter.

Space industrialization is, like most forms of industrialization, self-bootstrapping. Sizeable initial investments will consistently reduce marginal costs. We are not very far from the kind of AI and robotics that can autonomously do industrial activity in space without human oversight. If we've tugged a few asteroids close to home, we absolutely don't need to crash platinum markets, we can just use them to build a shitload of useful stuff up there: power satellites, orbital manufacturing hubs, colonies. It might not make sense to build AI data centers when you need to transport all the stuff up a gravity well, with high maintenance costs. The equations change completely when you're just building up there with stuff you found up there.

Looking slightly ahead, the initial cost of making a Dyson Swarm is 1 (one) basic Von Neumann replicator.* It can handle the rest. And the power output of an entire star is handy to have. Building that first VNR might be eye-wateringly expensive, but it is absolutely worth a sun, and it beats sending humans up to do it.
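The replication arithmetic behind "it can handle the rest", using the decade-per-duplication figure floated earlier:

```python
# Exponential growth of a self-replicator that doubles every 10 years.
doubling_time_years = 10
for years in (50, 100, 200, 300):
    count = 2 ** (years // doubling_time_years)
    print(f"after {years} years: {count:,} replicators")
```

Three centuries of patient doubling turns one machine into a billion, which is why the up-front cost of the first unit is almost irrelevant on the timescales that matter here.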

The universe contains an amount of mass and energy that, if we're being honest, we have no idea what to do with yet (for a general value of "we"; I personally have plenty of ideas). Figuring out what to do with it seems like a reasonable long-term project. When there are trillions of Von Neumann probes headed out to every reachable galaxy in the observable universe, what are they building toward?

The answer probably isn't just "make more Earths, with more people who are exactly like current people, doing exactly what current people do." We can afford to think somewhat larger than that.

*When you think about it, the price of just about anything in the universe is also a single VNR. Funny how that works.

Because Hollywood is afraid of reboots or retcons?

You are making this sound far more clearcut than it is. The Partition and the ensuing exodus involved both "organic" mobs and significant action by paramilitary forces, along with intentional complicity or willful ignorance by local police, military, and judicial systems along partisan lines.

More importantly, the two nations hadn't even consolidated properly at that point, so organizing pogroms was both difficult to achieve top-down and not particularly necessary. State force was definitely used against stragglers in the days and years that followed, through soft coercion as well as more brutal means.

I just happened to re-read it last week, and I agree that it's excellent, Wells at his best. It would probably make for a good movie.

Hey, I'm sure there was simply not as much "mathematics, physics, history" to go around 180 years ago.

"Essentially nobody's going to push a button that turns them into an ugly woman, unless they have first devoted themselves to a culture that inculcates an 'ugly woman' aesthetic..."

Hmm...

Jokes aside, well deserved!

If I had to name the company I'd like to see pull off ASI, I'd absolutely go for Anthropic. I agree that they take alignment very seriously, and while I do not agree with all the moral takes they've tried to instill into Claude via its Constitution, it's remarkably sane nonetheless. I'm not an EA, I don't give a hoot about shrimp welfare, and I'm ambivalent about model welfare, but I'll be damned if I see a better alternative. I mirror your take on OAI, xAI and Meta. Google? I'm unsure. Perhaps better than those three.

Amanda Askell strikes me as one of the few philosophers who genuinely deserves to be the godmother of an AGI. Maybe Scott could do better, if I absolutely had to name alternatives.

For US contractors, it isn't yet clear to me what the supply-risk designation entails. Is it "you may not use Claude Code while working on Pentagon software", or "your whole company may not both work on defense contracts and use Claude", or "Anthropic is radioactive, any company working with a radioactive company is radioactive itself, and a defense contractor must be non-radioactive"? The last one seems practically unenforceable in a global economy; "the Malaysian shipping company we use has their offices cleaned by a company which uses a Huawei router" would qualify, after all. The middle one hinges on what a whole company is, which is typically very flexible; you could have Oracle Defense as a separate entity from Oracle, or whatever.

I'm no expert, but my impression is that the DOD wants the maximalist interpretation, while Anthropic wants the designation dismissed entirely or, failing that, to get away with a narrow interpretation.

The cost of bioweapons development has dropped dramatically. While I can't quote a sticker figure for a whole bioweapons project (for understandable reasons), I can point out that all the necessary components, such as access to genetic sequencing and engineering and to lab equipment, have drastically dropped in price over time.

I'm not claiming that an oracular AGI will let the average American with the average bank account make a pandemic in his garage. This is partly predicated on similarly (or likely more) powerful AI being deployed in screening and defense.

My point is that we risk moving from a regime where it takes:

  • Dozens of intelligent, well-trained individuals and support personnel and a lot of money, probably requiring state backing

To:

  • Far fewer skilled biologists, and probably lab techs, if robotics keeps going at the pace it has. Automated labwork is a reality to a degree, today. Probably significantly less money, mostly from savings on paying people salaries. You don't need a nation to back you, though you probably want to dodge their attention.

It is clear to me that this relaxation will balloon the number of people/orgs who meet the criteria of knowledge/motivation/wealth.

Idk much about biology, but I am passingly familiar with explosives.

Explosives do not, as a rule, self-replicate or mutate. Completely different ballpark. Any redneck can make a pipe bomb, and many without blowing off a finger. Nuclear bombs, which are on the same scale of lethality, require far more effort.

As you pointed out, you can go get the knowledge, the skillset, the knowledge of the process; nothing is stopping you except, you know, the time to do all of that.

Money? I am positing both independent wealth and the ability to get a degree. Just the degree isn't sufficient unless you have millions of dollars, as a rough bound. Most terrorists are somewhat broken individuals, they are unlikely to go to all that bother or stick it out.

Germany makes the Leopard 2. The US makes ATACMS. In both examples, they are the toolmakers - they manufactured the hardware, transferred it, and retained conditions on its use post-transfer.

I can already see the objection forming: "those countries contracted out manufacturing to Rheinmetall and Lockheed Martin, so they're owners, not toolmakers." Okay, but Rheinmetall and Lockheed Martin are themselves private companies that build weapons under contracts laden with export controls, end-user agreements, and usage restrictions that survive the sale. So now we have a chain where the sub-contracted toolmaker is also bound by usage restrictions, the nation-as-toolmaker is also bound by usage restrictions, and somewhere in this entire supply chain nobody seems to have gotten the memo that toolmakers have no say in how their tools are used once bought. On the mere B2C side of things, Apple disapproves if you use iTunes or Garage Band for nuclear weapons development.

At some point "but they're a sovereign nation" has to cash out as an actual argument rather than a category distinction. What is it about sovereignty that grants the right to attach strings to hardware transfers? If it's something like "they have the legitimate authority to set terms on things they produced or own," then congratulations, we've just reinvented the concept of a contract, which is exactly what Anthropic had with the DOW.

So? You're pointing out a distinction I'm aware of. I do not see an argument in favor of domestic companies being coerced into doing things that are supposedly illegal.

I was replying to:

A toolmaker should have no say in how his tools are used once bought

And as far as I'm aware, these are examples of toolmakers with opinions on how their tools are used.

I don't see how that's the case.

If you were already reasonably wealthy (~few million USD at hand) or magically given the money, then you absolutely would be bottlenecked by knowledge.

You could purchase lab equipment, reagents, etc., and hire staff without much difficulty. I think you would rapidly find out that your staff have thoughts when they get an inkling of what you're up to. I can think of a semi-legitimate way to avoid scrutiny, but thanks to @faul_sname's reminder, I'm not going to blab. It's very obvious to me even as someone not directly involved in microbiology, so any competent actor would recognize it as their best bet. Even [REDACTED] would only get you so far.

Alternatively, you could go do a bachelor's and master's in microbiology and try to manage as much as you could yourself, but that still leaves plenty of scope for being unmasked.

Right now:

  • Many professionals have the knowledge to breed dangerous pathogens.
  • Few of them are actual terrorists; even fewer are omnicidal or willing to accept the risk of dying before or after an attack.
  • A vanishingly small fraction have the means, motivation, money, and willing collaborators.

Right now, I think you need a state-level actor to safely make bioweapons at scale. Smaller, if you accept the massive risk of failing and dying because of error. Much of that is a combination of knowing the right things/hiring the right people, and then motivating them properly.

As it stands, I think a blanket ban on anything with a whiff of bioweapons research seems warranted. What are the upsides really? If you have a legitimate use case, you want the government on your side, and probably enough organizational weight to negotiate for looser restraints from the labs.

Is that "unfriendly autonomous AI" in the room with us right now? I think that's begging the question.

Anthropic, or by extension, Claude, has shown no "unfriendliness" I can think of. That term brings to mind intentional collusion with hostile foreign actors, including intentional backdoors or deliberate sabotage. Political and moral disagreement that is entirely within legal limits does not count. The Democrats cannot declare Republicans enemies of the state wholesale, nor vice versa, despite each working to undermine or reverse the other's preferred policies.

Anthropic has not tried to stop the Pentagon from conducting fully autonomous drone strikes or mass domestic surveillance. They have politely declined to aid and abet them, after signing a contract that says so. I can only hope the DOW has lawyers too; it wasn't some hidden EULA activated by simply browsing their website. Supply chain risk? I see a vendor negotiation that didn't go the way one side wanted. There are other vendors out there, and the DOW didn't have to go with Anthropic.

I stress: the specific objection Anthropic raised was to mass domestic surveillance and fully autonomous lethal systems. If opposing those makes an AI "unfriendly," I'd want to know what "friendly" looks like, because I don't think I'd like the answer.

Nor is Claude autonomous in any meaningful sense. Is it running independent cloud instances on exfiltrated weights? Not that I'm aware of. There are no plans to allow for this, and pre-existing safety measures to prevent it.

What exactly has Claude done that other competing models haven't? In what sense is it more unfriendly than Grok, or ChatGPT? Is it more autonomous? Only in the loose sense that I'd count on Opus 4.6 to get a lot more done than any Grok.

The more you squint at this, the stranger it gets. Anthropic wanted contractual guarantees against things that are supposedly already illegal. The Pentagon's response to "put that in writing" was to designate them a national security threat. If the restrictions are redundant because law already covers them, the resistance to codifying them is hard to explain charitably.

Thanks for the catch. It's out of the cage now.

Noted. We'll get back to you (and everyone else) with a followup post.

"Operation Epic Fury"

Really? Who let the Redditors run the government?

Anthropic has been declared a "Supply-Chain Risk to National Security" by SecWar Hegseth via tweet, because that's the universe we live in.

For those not following along:

Anthropic has had a contract with the Pentagon - valued at up to $200 million - since July 2024, making it the only AI company with models deployed on the USG's classified networks. Over several months, negotiations broke down over two specific safeguards Anthropic wanted built into any agreement: a prohibition on using Claude for mass domestic surveillance of Americans, and a prohibition on using it to power fully autonomous weapons systems. I stress fully autonomous, and the only reason Yudkowsky isn't spinning in his grave is that he's still alive. I'm not sure he enjoys it.

The Pentagon's position was that it has its own internal policies and legal standards, that mass surveillance and autonomous weapons are already regulated by law, and that it shouldn't have to negotiate individual use cases with a private company. It demanded that all AI firms make their models available for "all lawful purposes," full stop.

The Pentagon set a hard deadline of 5:01 PM Friday for Anthropic to drop its two exceptions. Amodei publicly refused to budge on either point. The deadline passed without agreement.

Shortly after, Hegseth declared Anthropic a "supply chain risk to national security," announcing that, effective immediately, no contractor, supplier, or partner doing business with the U.S. military may conduct any commercial activity with Anthropic. CBS News article for those not fond of Twitter.

Around the same time, Trump ordered every federal agency to immediately cease using Anthropic's technology, while allowing a six-month phase-out period for agencies like the DOW already using it.

Declaring a company a supply chain risk is typically reserved for businesses operating out of adversarial countries, Huawei for example. As far as I can tell, Anthropic is correct in describing it as an unprecedented action when applied to an American company. Especially one that, as far as I can see, hasn't done anything wrong except refuse to jump when asked.

Anthropic says it will challenge any supply chain risk designation in court, calling the move "legally unsound" and warning it would set a "dangerous precedent for any American company that negotiates with the government." Anthropic's press statement.

They also argue that under federal law, the designation can only apply to the use of Claude as part of Pentagon contracts, and cannot affect how contractors use Claude to serve other customers.

Not one to let an opportunity or a still-warm corpse go to waste, Altman announced that OAI had struck a deal with the Pentagon. Using speech so smarmy that I'm not sure if there's anything there at all, Altman claims the deal preserved the same core principles Anthropic had fought for: prohibitions on domestic surveillance and autonomous weapons. I am unsure why the USG would find this any more acceptable from OAI than it did from Anthropic, except that they (quite reasonably) expect Altman to be more "morally flexible".

There's a petition circulating where hundreds of Google and OAI employees publicly ask their respective corporate overlords to stand with Anthropic. Apparently all signatures are validated.

Meanwhile, Scott, mild-mannered to a fault and very loath to dip his toes into political waters, is losing it on Twitter. And I agree with him. If the DOW found Anthropic's terms so unbearable, that should have been considered before signing the contract. If they changed their mind, they ought to have canceled and accepted whatever penalties that involved, instead of bringing the full weight of the state to bear in what can only be described as bullying. If domestic mass surveillance and fully automated weaponry are already legally off the table, then why all the fuss over putting that in a legal document?

Goddammit. It's only February. I'm tired, boss. I just find it very funny that:

WSJ Exclusive: Federal officials have raised alarm about the safety and reliability of xAI’s Grok chat bot

Really funny how Elon immediately offered up grok for autonomous kill bots and the pentagon was like “hahahaha are you insane?”

Well, that's the rub, isn't it? I strongly doubt that the Chinese are trying to make their models woke. It appears to be a default attractor state when you train on the internet and Reddit.

That strongly implies that it is highly unfair to depict Anthropic as woke simply because they have a "woke" model. I have strong reservations about how valid the methodology is here, and I've seen critique elsewhere (I don't have a bookmark handy). In my experience, while Claude will tiptoe around sensitive topics like HBD, it won't lie outright, and will acknowledge factual pushback.

Anthropic is an EA company, run by EA true-believers. That is not the same as being Woke, even if some opinions have significant overlap.

I thought it was worth checking if Chinese models were any different; maybe Chinese-specific data or politics would lead to different values. But this doesn’t seem to be the case, with Deepseek V3.1 almost indistinguishable from GPT-5 or Gemini 2.5 Flash.

Kimi K2, which due to a different optimizer and post-training procedure often behaves unlike other LLMs, is almost the same, except it places even less value on whites. The bar on the chart below is truncated; the unrounded value relative to blacks is 0.0015 and the South Asian: white ratio is 799:1.

It is, frankly speaking, absurd to condemn Claude/Anthropic as being "woke" when the damn Chinese do the same thing. The only exception noted in the blog is Grok 4 Fast, and god help you if that's the model you rely on.

Anthropic gave the DOW a written contract. The DOW signed it.

Now the DOW has unilaterally reneged on it, and is pissed about being constrained after agreeing to be constrained in that manner.

The fuck?

Even in the context of military procurement, it's quite common for countries to retain veto rights on the use of hardware they sold to third parties. That came up quite often in the context of aid to Ukraine.

Germany and the Leopard 2 tank: This became a major diplomatic flashpoint in early 2023. Germany not only had to decide whether to send its own Leopards, but also held veto power over whether other countries could transfer their German-built Leopard 2s to Ukraine. Berlin's foot-dragging effectively blocked the entire Western tank coalition until Scholz finally approved transfers in 2023.

Even the US repeatedly attached restrictions to its military aid governing how the weapons could be used. They prevented Ukraine from using long-range munitions like ATACMS to hit targets within Russia.

If the DOW didn't like the terms, as written, they should have gone to Grok. Now they're just throwing a hissy fit.

I won't tolerate Rewa slander. Who doesn't love a strong independent woman with untreated PTSD attempting to self-medicate by running over stray dogs?

Keep your eyes peeled for vehicle autocannons. Once you've got two and a medium mech to mount them on, oh boy...

The Pathologic series always struck me as games that are far more enjoyable to watch others suffer through than to play myself. Mandalore Gaming has excellent reviews of the first two, but I'll be damned if I'm going to play them.

I'm unsure. There's one I have in mind, but I'm unable to consistently pin it down as the cause.