
Culture War Roundup for the week of September 5, 2022

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


First volley in the AI culture war? The EU’s attempt to regulate open-source AI is counterproductive

The regulation of general-purpose AI (GPAI) is currently being debated by the European Union’s legislative bodies as they work on the Artificial Intelligence Act (AIA). One proposed change from the Council of the EU (the Council) would take the unusual, and harmful, step of regulating open-source GPAI. While intended to enable the safer use of these tools, the proposal would create legal liability for open-source GPAI models, undermining their development. This could further concentrate power over the future of AI in large technology companies and prevent research that is critical to the public’s understanding of AI.

The definition of "GPAI" is vague, but it appears to differ from the commonly understood meaning of "AGI" and may include systems like GPT-3 and Stable Diffusion.

I will be very curious to see how much mainstream political traction these issues get in the coming years and what the left/right divide on the issue will look like.

And here it is, as predicted: the managers eventually running into conflict with technology that isn't under their control.

In a sense, there probably is some level of regulation that is appropriate: you don't want to live in a society where the algorithm decides what you can do, where you can go, and what you can say, all without any actual justification beyond probability.

But somehow I don't think that is what the bureaucrats in Brussels fear, rather than the technology undermining their power. Which is why they are going with this avenue of attack: making sure only they and the people they accredit are allowed to access the tech. Can't give world-changing powers to potential enemies, after all.

Ah, how I miss the lull of liberalism.

Can you give me an example of how AI could undermine the power of “the bureaucrats in Brussels”?

Does the existence of openly available cryptographic tools and communication channels, in your mind, undermine the power of state security to quash dissidents? If not, why does Beijing insist on everyone using not Matrix/Element or Briar or even Telegram (with keys beyond their reach) but WeChat, where the Tovarisch Commissar can check up on you? Why do FSB and NSA and everyone else of that Big Brother mindset fight e2e encryption?

Because information asymmetry is power. Being able to go about your business unsupervised by A increases your power relative to A. Being supervised makes you, in the limit, merely an extension of A.
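
For anyone who wants the mechanics of "keys beyond their reach" spelled out, here is a minimal sketch in Python using the PyNaCl library (my choice of library, purely for illustration; this is not any particular messenger's actual protocol). The point is that a relay server which only ever sees ciphertext has nothing useful to hand over.

```python
# Minimal end-to-end encryption sketch with PyNaCl (libsodium bindings).
# Illustrative only -- real messengers layer key agreement, ratcheting,
# and authentication on top of primitives like these.
from nacl.public import PrivateKey, Box

# Each endpoint generates its own keypair; private keys never leave the device.
alice_sk = PrivateKey.generate()
bob_sk = PrivateKey.generate()

# Alice encrypts to Bob's public key.
to_bob = Box(alice_sk, bob_sk.public_key)
ciphertext = to_bob.encrypt(b"meet at the usual place")

# The relay only ever sees `ciphertext` -- without alice_sk or bob_sk it
# cannot decrypt, which is exactly what makes key-escrow demands attractive.
from_alice = Box(bob_sk, alice_sk.public_key)
assert from_alice.decrypt(ciphertext) == b"meet at the usual place"
```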

Largely the same principle applies to all areas where AI promises drastic improvements: any sort of generative tools, content curation tools, personal assistants, scientific instruments, CAD, robot control software, you name it. Everything that multiplies individual agency you would want to run on instances you can trust, without the pinky-promise of the industrial-security complex and unaccountable bureaucracies, who can either blacklist your content for some self-determined thoughtcrime or manipulate the public into approving the criminalization of your thoughts.

Because then it improves the tradeoff between autonomy from them and your quality of life, and autonomy is inherently valuable (not to utilitarians I suppose, but to normal people of Western cultural background). And people would be able to afford more of it, and contribute less to the incumbent actors, and hope to see them diminish and become non-threats.

Hell, why do you think we've even moved over to a website managed by Zorba, from the cozy Advance Publications, Inc. property where the «Director of Policy» has an Atlantic Council background? Granted, that's still a compromise, the server being physically controlled by DigitalOcean, which merely has less (but not zero) interest in narrowly policing its clients than Reddit has in controlling its product for advertisers. Had Zorba the tools to become a 10X programmer in his free time (if you consider yourself one, Zorba, then amend that to 100X) – we'd probably be on some insanely robust but still convenient p2p platform, with... I dunno, a frontend hosted on magnet links served from an ETH smart contract, and a backend provided by a distributed swarm of anonymous Tor-routed plain key-value storage nodes, with every post in a tree secured by PoW and a reputation economy, and moderation decisions overlaid on top of it like a voluntary custom filter, cryptographically signed of course...

Which would be an inherently hard target, expensive to shut down for any aspiring Keffals equivalent, or even for ADL-level or Western nation-state-level threats. And that'd become the norm.
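
To gesture at what that hypothetical platform would mean at the data layer, here's a toy Python sketch (using PyNaCl for signatures; every name in it is made up for illustration, not a real protocol): content-addressed posts forming a tree, signed by their authors, with moderation as a client-side filter overlay rather than deletion at the storage layer.

```python
# Toy sketch of signed, content-addressed posts with client-side moderation.
# Hypothetical illustration only. Uses PyNaCl (libsodium) Ed25519 signatures.
import hashlib
import json
from nacl.signing import SigningKey

def post_id(record: dict) -> str:
    """Content address: hash of the canonical serialization."""
    blob = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def make_post(author_key: SigningKey, body: str, parent: str | None) -> dict:
    record = {
        "body": body,
        "parent": parent,  # tree structure: replies point at parent IDs
        "author": author_key.verify_key.encode().hex(),
    }
    signed = author_key.sign(json.dumps(record, sort_keys=True).encode())
    return {"id": post_id(record), "record": record,
            "sig": signed.signature.hex()}

# Storage nodes hold everything; moderation is a voluntary overlay each
# client subscribes to -- here, just a set of post IDs a mod team flagged.
def visible(posts: list[dict], blocklist: set[str]) -> list[dict]:
    return [p for p in posts if p["id"] not in blocklist]

alice = SigningKey.generate()
root = make_post(alice, "First volley in the AI culture war?", parent=None)
reply = make_post(alice, "And here it is, as predicted.", root["id"])
# Subscribing to a different blocklist changes what you see, not what exists.
print(len(visible([root, reply], blocklist=set())))          # 2
print(len(visible([root, reply], blocklist={reply["id"]})))  # 1
```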

AI democratization is the logical next step of the entire FOSS and OSH project, and libertarian, anarchist and communalist political paradigms – go read Bakunin or Rothbard or Hoppe or Stallman, if you want. Right now, you've got to accept material and political sacrifices on par with the Amish to achieve (some qualified) off-grid autonomy, and nobody's willing to go that far; and they, too, exist solely at the sufferance of the oligarchy. At some point in the near future, AI labor multipliers will allow small and distributed communities with limited economic output to maintain the quality of life currently available only for good citizens of large nation states with big corporations; even allow them security, if not from major players then from random crime, if they opt out of the state protection racket. Sure, by that point those good citizens will enjoy hyperpalatable Netflix4D propaganda streaming over Neuralink into their Metaverse pods. But will the marginal increase in marketable quality be worth continuing to slurp down dreary propaganda? The tradeoff changes, and more people become willing to jump ship.

You conveniently assume linear or superlinear returns to capability, where AI will necessarily benefit the incumbent actors even more than commoners. That's not how proliferation works, just as a 50 Mt warhead is not 100x more of a deterrent than a 500 kt one (assuming a similar ability to deliver it to the adversary's capital). There are many asymmetric functions for defense, logistic curves for human preference, and plenty of uncertainty in the future.
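
The warhead arithmetic, for the record, follows from the standard cube-root rule of thumb that blast damage radius scales roughly as yield^(1/3), so a 100x jump in yield buys well under 5x the radius:

```python
# Cube-root scaling rule of thumb: blast radius ~ yield ** (1/3).
# Illustrative arithmetic for the 50 Mt vs. 500 kt comparison above.
small_kt = 500        # 500 kt warhead
large_kt = 50_000     # 50 Mt = 50,000 kt

yield_ratio = large_kt / small_kt          # 100x the yield...
radius_ratio = yield_ratio ** (1 / 3)      # ...but only ~4.6x the radius
print(f"{yield_ratio:.0f}x yield -> {radius_ratio:.1f}x damage radius")
# 100x yield -> 4.6x damage radius
```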

A good route, an escape from this path-dependent road to serfdom, is – to put it mildly – not very likely.

But I don't yet see how it is impossible, and if it is possible, it's only via commodified AI.

Wow man, you are on fire today.

I sometimes disagree with your stuff, but more often am just not sure what to think, as the inscrutability is too much to bear -- this is different.

Not sure what you've been doing to hone your edge, but I look forward to more of it. (whatever that is)

Does the existence of openly available cryptographic tools and communication channels, in your mind, undermine the power of state security to quash dissidents?

Not really.

The US government did a perfectly fine job of crushing the alt right, and it had nothing to do with their communications not being secret enough.

If not, why does Beijing insist on everyone using not Matrix/Element or Briar or even Telegram (with keys beyond their reach) but WeChat, where the Tovarisch Commissar can check up on you? Why do FSB and NSA and everyone else of that Big Brother mindset fight e2e encryption?

A variety of reasons. I'm quite certain that they could get by even with e2e encryption being easily and publicly accessible though.

Largely the same principle applies to all areas where AI promises drastic improvements: any sort of generative tools, content curation tools, personal assistants, scientific instruments, CAD, robot control software, you name it.

So... how are any of these things going to help you achieve your desired anti-establishment political aims? Is your AI assistant going to put a reminder on your calendar telling you when it's time to take your AI robot buddies and go storm the palace? What happens when the palace guards have bigger and better AI robot buddies?

I'm not really trying to be cheeky. I'm just asking you to describe in sufficient detail what you're imagining. People thought throughout history that lots of different things were going to revolutionize human relations and put an end to tyranny - democracy, reason, public education, communism. None of them did. We're mostly still dealing with the same old shit that humanity has always dealt with. You can't just stop at "AI is awesome and I want it". You need a concrete argument for why things will actually be different this time - otherwise you end up with the classic communist problem where everyone just assumed "well of course if you tear down existing society then everyone will spontaneously rearrange themselves into new social relations that are perfectly just and equitable" without actually stopping to consider the details of how that was going to work.

You conveniently assume linear or superlinear returns to capability, where AI will necessarily benefit the incumbent actors even more than commoners.

Of course it will necessarily benefit the incumbent actors. The US has a rather high rate of gun ownership, and who do guns benefit more? The people or the government?

I'm not really trying to be cheeky

Guess you're just naturally good.

If I want to get some snarky demoralization content to the effect of «Russische Ivan, Rücken nach unten» (roughly: «Russian Ivan, back down») or «come out and drink your corn syrup», I can go talk with @2rafa (actually looking forward to it). The topic of a hypothetical causal chain getting us from here to there is interesting, but I don't feel like addressing it in depth when you dismiss already-present evidence against your model, i.e. the efforts of incumbent actors to maintain their tech advantage, with a «not really» and «a variety of reasons».

Is your AI assistant going to put a reminder on your calendar telling you when it's time to take your AI robot buddies and go storm the palace?

It'll do normal reminders, just without inserting propaganda and advertisements to alienate my children from me and eventually convince them that they're trans BLM crusaders. If that's how you want to frame it, though I believe someone like Rod Dreher would be more receptive to such a sales pitch.

If I had to update my beliefs every time I encountered evidence against them, I'd be able to hold very few beliefs about anything of importance.

As a general methodological point, I don't think there's anything objectionable about noting that you don't find an argument convincing, even though you're not prepared to give a fully-formed response to it.

I don't think there's anything objectionable about noting that you don't find an argument convincing, even though you're not prepared to give a fully-formed response to it.

Agreed.