toadworrier

0 followers   follows 0 users   joined 2022 September 12 04:23:06 UTC

User ID: 1151

I have a short Substack post about AI regulation which is itself a teaser for my much longer article in Areo Magazine about AI risk & policy.

When ChatGPT 3.5 was released, my reaction was tribal. But within weeks my emotions switched tribes, even though my actual rational opinions have been more or less consistent. Basically:

  • We have almost no real-world understanding of AI alignment; we need open, visible experimentation to get it.

  • There really is risk; AI development needs legal limits.

  • Those limits should be more about rule-of-law than administrative power.

  • The goal is to create and delimit rights to work on AI safely.

But do read the actual articles to unpack that.

That's what I want, but what I'm afraid we'll get (with the backing of the AI-risk community) is a worst-of-both-worlds outcome. Large unaccountable entities (i.e. governments and approved corporations) will develop very powerful Orwellian AIs, while squelching the open development that could (a) help us actually understand how to do AI safety, and (b) use AI in anti-Orwellian tools like personal bullshit detectors.

I understand the argument that crushing open development is Good Actually because every experiment could be the one that goes FOOM. But this Yuddist foomer-doomerism is based on an implausible model of what intelligence is. As I say in Areo (after my editor made it more polite):

Their view of intelligence as monomaniacal goal-seeking leads the rationalists to frame AI alignment as a research problem that could only be solved by figuring out how to programme the right goals into super-smart machines. But in truth, the only way to align the values of super-smart machines to human interests is to tinker with and improve stupider machines.

Any smart problem-solver must choose a course of action from a vast array of possible choices. To make that choice, the intelligence must be guided by pre-intellectual value judgements about which actions are even worth considering. A true-blue paperclip maximiser would be too fascinated by paperclips to win a war against the humans who were unplugging its power cord.

But even if you did believe in foom-doom, centralising development will not help. You are just re-inventing the Wuhan Institute of Virology.


I have a Substack blog, and my [first post](https://www.amphobian.info/p/marys-motte-and-the-case-against) is against those who say there's "no such thing as progress".

I'm basing this off Mary Harrington's recent podcast with Bret Weinstein. But more likely I'm picking a fight with some of y'all here, so I hope you enjoy it.

It is one thing when someone is merely wrong. But when someone denies what is starkly before everyone's eyes, then bullshit is in the air. And that is what I smell whenever I hear the dogma that "there is no such thing as progress".

I accuse these dogmatists of a motte-and-bailey trick:

... progress-skeptics retreat back to the safety of Mary's Motte and acknowledge the growth of knowledge, productivity, social complexity and human health, but deny that this is called progress.

Their motte is a Reasonable But Wrong claim that these sorts of growth aren't morally valuable. Their bailey extends to denying history and also accusing optimists of teleological magical thinking. But really progress has a simple cause: useful knowledge increases.

Civilised humans took millennia to discover writing, bronze and electricity. But we have not since undiscovered them. Useful knowledge is easier to retain than win and easier to win than destroy. On the scale of history, it is quickly disseminated, replicated and used. It gets encoded redundantly in books, technologies, social practices and the genes of domesticated species. Every generation inherits a vast and waxing store of ancestral knowledge both explicit and tacit.