KolmogorovComplicity

2 followers   follows 0 users   joined 2022 September 04 19:51:16 UTC

User ID: 126

The best policy is probably to a) maximally leverage domestic talent, b) allow foreign STEM students, but require that they be selected purely on the basis of academic merit and set up incentives such that almost all of them stay after graduating, and c) issue work visas on the basis of actual talent (offered salary is a close enough proxy).

That's not what we've been doing, of course. We have, instead, been deliberately sandbagging domestic talent, allowing universities to admit academically unimpressive foreigners as a source of cash, letting or sometimes forcing actually impressive foreign students to return home after graduation, and dealing out H-1B visas through a lottery for which an entry-level IT guy can qualify.

Against that backdrop, there's probably quite a lot of room to kick out foreign students and still produce a net improvement by eliminating affirmative action and tweaking the rules on H-1B and O-1 visas.
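
To make (c) concrete, here's a trivial sketch of filling a visa cap by offered salary instead of drawing a lottery. The applicants and cap are made up; the point is just how small the mechanical change is:

```python
# Hypothetical comparison: lottery allocation (current H-1B style) vs. ranking
# by offered salary. Applicant data and the cap are invented for illustration.
import random

applicants = [
    {"name": "A", "offered_salary": 240_000},
    {"name": "B", "offered_salary": 85_000},
    {"name": "C", "offered_salary": 150_000},
    {"name": "D", "offered_salary": 62_000},
]
CAP = 2  # visas available

lottery_winners = random.sample(applicants, CAP)
ranked_winners = sorted(applicants, key=lambda a: a["offered_salary"], reverse=True)[:CAP]

print("lottery:      ", [a["name"] for a in lottery_winners])
print("salary-ranked:", [a["name"] for a in ranked_winners])
```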

Microprocessors, RAM, flash memory, cameras, digital radios, accelerometers, batteries, GPS... a small drone is basically just a smartphone + some brushless motors and a plastic body. You even need the display tech; it just moves to the control device.

A larger drone or another type of killbot might require more — jet engines or advanced robotics tech or whatever — but it will still require pretty much everything in the smartphone tree.
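
As a rough illustration of the overlap (the component lists here are my own simplification, not real bills of materials):

```python
# Sketch of the "smartphone tree" claim: compare a simplified smartphone parts
# list with a simplified small-drone parts list. Lists are illustrative only.
smartphone = {"microprocessor", "RAM", "flash", "camera", "digital radio",
              "accelerometer", "battery", "GPS", "display"}
small_drone = {"microprocessor", "RAM", "flash", "camera", "digital radio",
               "accelerometer", "battery", "GPS",  # display moves to the controller
               "brushless motors", "speed controllers", "plastic body"}

shared = smartphone & small_drone
drone_only = small_drone - smartphone
print(f"{len(shared)} of {len(small_drone)} drone subsystems come from the smartphone supply chain")
print("drone-specific parts:", sorted(drone_only))
```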

If brain-computer interfaces reach the point where they can drop people into totally convincing virtual worlds, approximately everyone will have one a decade or two later, and sweeping societal change will likely result. For most purposes, this tech is a cheat code to post-scarcity. You’ll be able to experience anything at trivial material cost. Even many things that are inherently rivalrous in base reality, like prime real estate or access to maximally-attractive sexual partners, will be effectively limitless.

Maybe this is all a really bad idea, but nothing about the modern world suggests to me we’ll be wise enough to walk away.

The best UI for an AI agent is likely to be a well-documented public API, which in theory allows for much more flexibility in how users interact with software. In the long run, the model could look something like your AI agent generating custom interfaces on the fly, to your specifications, tailored for whatever you're doing at the moment. That could be a much better situation for power users than the current trend of designing UI by A/B testing what will get users to click a particular button 3% more often.
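
To make that concrete, here's a minimal sketch of "the app exposes an API, your agent assembles the interface." Every name here (the endpoints, TaskContext, generate_interface) is invented for illustration, and the agent step is hard-coded where a real system would use a model:

```python
# Hypothetical sketch: an app exposes documented operations instead of a fixed
# UI, and the user's agent emits a declarative interface spec for the task at hand.
from dataclasses import dataclass, field

@dataclass
class TaskContext:
    goal: str                                                # what the user is doing right now
    preferred_controls: list = field(default_factory=list)   # e.g. ["table", "keyboard_shortcuts"]

# The app's public, documented surface: plain operations, no baked-in UI.
PUBLIC_API = {
    "list_invoices": lambda: [{"id": 1, "total": 120.0}, {"id": 2, "total": 75.5}],
    "mark_paid": lambda invoice_id: {"id": invoice_id, "status": "paid"},
}

def generate_interface(ctx: TaskContext) -> dict:
    """Stand-in for the agent: return a UI spec tailored to the current task.

    A real agent would have a model choose widgets and bindings from the API
    docs; here the mapping is hard-coded so the sketch runs without one.
    """
    widgets = []
    if "invoice" in ctx.goal:
        widgets.append({"widget": "table", "bind": "list_invoices",
                        "row_action": {"label": "Mark paid", "call": "mark_paid"}})
    return {"title": ctx.goal, "widgets": widgets, "controls": ctx.preferred_controls}

spec = generate_interface(TaskContext("review unpaid invoices", ["table", "keyboard_shortcuts"]))
print(spec)                            # declarative spec some renderer could draw
print(PUBLIC_API["list_invoices"]())   # the data the table would bind to
```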

> There are no planets we’ve ever found that can likely support human habitation without terraforming. Certainly nowhere else in the solar system would support human habitation without terraforming, which mostly involves hypothetical technology and would take thousands of years, just to end up with a worse version of what we already have.

This is true, but the implication isn't that we can't conquer space, just that we should assume we'll have to mostly build our own habitable volumes. There's enough matter and energy in the solar system to support at least hundreds of billions of humans this way, in the long run.
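
For a rough sense of scale: the solar output and asteroid-belt mass below are standard ballpark figures, while the per-capita energy and structural-mass budgets are generous assumptions I'm supplying for illustration:

```python
# Back-of-envelope check on the "hundreds of billions" claim.
SOLAR_OUTPUT_W = 3.8e26          # total power radiated by the Sun
BELT_MASS_KG = 2.4e21            # approximate total mass of the asteroid belt

POWER_PER_PERSON_W = 1e5         # assume 100 kW each, ~10x current US per-capita primary energy use
HABITAT_KG_PER_PERSON = 1e7      # assume 10,000 tonnes of habitat structure and shielding per person

people_by_energy = SOLAR_OUTPUT_W / POWER_PER_PERSON_W
people_by_mass = BELT_MASS_KG / HABITAT_KG_PER_PERSON

print(f"energy-limited population:    {people_by_energy:.1e}")  # ~3.8e21
print(f"belt-mass-limited population: {people_by_mass:.1e}")    # ~2.4e14
```

Even with these deliberately fat per-capita budgets, the binding constraint (asteroid-belt mass alone, ignoring moons and comets) still comes out around hundreds of trillions, so hundreds of billions looks conservative.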

So Musk might be a little off-target with his focus on Mars. Still, at this point we don't really need to make that decision; SpaceX is working on general capabilities that apply to either approach. And maybe it's not a bad idea to start with Mars and work our way around to habitats as AI advances make highly automated in-space resource extraction and construction more viable.

> What’s more, a multiplanetary species would likely still be at risk of pandemics / MAD / extinction-risk events. Sure, an asteroid can’t destroy us, but most other extinction scenarios would still be viable.

Many forms of x-risk would be substantially mitigated if civilization were spread over millions of space habitats. These could be isolated to limit the spread of a pandemic. Nuclear exchanges wouldn't affect third parties by default, and nukes are in several ways less powerful and easier to defend against in space. Dispersal across the solar system might even help against an unfriendly ASI, by providing enough time for those furthest from its point of emergence to try their luck at rushing a friendly ASI to defend them (assuming they know how to build ASI but were previously refraining for safety).
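
As a toy illustration of the isolation point, here's a crude spread-over-a-travel-network model. Every parameter (network size, link counts, transmission probabilities) is invented; the only point is the qualitative difference screening makes:

```python
# Toy model: habitats are nodes in a sparse travel graph; an outbreak spreads
# along links with some per-link transmission probability. All numbers invented.
import random

def outbreak_size(n=10_000, links_per_habitat=4, p_transmit=0.5, seed=0):
    rng = random.Random(seed)
    # random sparse travel graph between habitats
    neighbors = [[rng.randrange(n) for _ in range(links_per_habitat)] for _ in range(n)]
    infected = {0}          # outbreak starts in habitat 0
    frontier = [0]
    while frontier:
        nxt = []
        for hab in frontier:
            for other in neighbors[hab]:
                if other not in infected and rng.random() < p_transmit:
                    infected.add(other)
                    nxt.append(other)
        frontier = nxt
    return len(infected)

print("open travel:     ", outbreak_size(p_transmit=0.5))   # reaches a large fraction of habitats
print("strict screening:", outbreak_size(p_transmit=0.01))  # outbreak stays local
```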