AI improves that. If your drones can't be jammed because they're autonomous and can find targets on their own, that's a critical military advantage. If your radar software gets optimized by some black-box AI to counter whatever arcane modification the enemy made to their jamming software, that's a major military advantage. Optimization of complex systems in unintuitive domains is a strong suit of AI. See AI-designed computer chips: Google has been doing that for a while. Modern AI systems are also useful for controlling high-energy plasma in fusion reactor chambers, predicting the weather (obvious military and economic significance) and countless other complex domains. Cyberwarfare is another obvious domain where AI is relevant: spear-phishing, reconnaissance, actual infiltrations...

Let me ask a practical question. That's a lot of if statements you made there.

Has AI actually done any of those things? The specific examples you give of things that supposedly already exist are mostly speculative - all I can find about AI-designed computer chips, for instance, are hype stories in pop science magazines rather than anything credible, and even those include the note that most of the AI designs did not work.

In general I am skeptical of the argument that goes, "I can tell it's valuable and useful because people are paying billions for it!" In a sense that proves that it's 'valuable', insofar as you can define value in terms of what people are willing to pay for, but none of that proves that it's useful. People are willing to pay vast amounts of money for obviously worthless things on a regular basis - NFTs are one infamous example.

I can concede a handful of highly technical niche applications - protein folding, plasma confinement, etc. - though even there I'm a little cautious. (I don't understand those technical fields, but in fields that I do understand, where AI is being hailed as a major breakthrough, the breakthroughs, once analysed, turn out to be heavily overrated at best.) But the AI-believer position, in cases like this, is that AI is literally going to make labour obsolete, or that AI is going to become superintelligent, achieve god-like power, and usher us all either to utopia or to utter destruction. And that position is so far in excess of any reasonable estimation of what this technology does that I have to raise my eyebrows. Or yell at a blog post on the internet, I suppose.

I think what he's saying is that techno-futurism is not perceived as a religion because techno-futurists do not make metaphysical or fundamental claims.

Personally I think this is mainly a semantic difference. It's not clear to me that there's a difference between "X is not perceived as a religion because X does not do these things typical of religion" and "X is not a religion". Isn't religion defined, at least extensionally, by the things typical of religion?

I don't think the concept of religion helps very much here. Better to just say that AI hype is a form of collective irrationality or delusional behaviour, if that's what he means.

I think this builds a number of questionable assumptions into the idea of 'human-level intelligence'. The models we have now are very good at some things that humans struggle with, but completely incapable of some things that are trivial for humans. There isn't a single unified 'intelligence' at which humans sit at a specific level and which machines are steadily approaching. Rather, human intelligence is a highly correlated cluster of aptitudes, and those aptitudes do not necessarily correlate in machines. It seems at least plausible to me that existing AI models keep getting better at the sorts of things they are currently good at without ever becoming the kind of thing we would recognise as intelligent.

Now, on one level that doesn't matter - I'm just suggesting that AI might keep improving without ever becoming AGI, and AI doesn't need to become AGI to cause technological unemployment, or to give some nation or other a major military advantage, or whatever else it is we're worried about. But I'd still like to know what mechanism we're predicting for that unemployment, or military advantage, or whatever else, because it is not immediately obvious how a language model produces any of those things.

To be honest, the existence and shape of much of this discourse continues to baffle me. There's a discourse around AI causing unemployment, even though AI has not yet caused any unemployment and there isn't an obvious mechanism for it to do so. Isn't the evidence so far that incorporating AI into a workplace increases workload rather than decreasing it? It's always possible that this changes, but I'd at least like to see the argument that it will, rather than have it just be assumed.

The pattern seems to play out time and time again - Scott's last post about China made me want to scream the same thing. Where is the reason to think that AI is militarily and economically significant at all? What if this is all nonsense? Isn't it all based on a vision of AI technology that has no justification in reality?

Maybe there's an AI 101 argument out there somewhere that everybody else has read and that passed me by entirely, but right now I remain incredibly confused by this discourse. We made systems that can generate text and images, but which are consistently pretty crap at both. Given time I can imagine them becoming somewhat less crap, but at what point do they pivot or transform into the sorts of devices that could cause massive technological unemployment, or change a war between great powers?