
Culture War Roundup for the week of September 5, 2022

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


First volley in the AI culture war? The EU’s attempt to regulate open-source AI is counterproductive

The regulation of general-purpose AI (GPAI) is currently being debated by the European Union’s legislative bodies as they work on the Artificial Intelligence Act (AIA). One proposed change from the Council of the EU (the Council) would take the unusual, and harmful, step of regulating open-source GPAI. While intended to enable the safer use of these tools, the proposal would create legal liability for open-source GPAI models, undermining their development. This could further concentrate power over the future of AI in large technology companies and prevent research that is critical to the public’s understanding of AI.

The definition of "GPAI" is vague, but it likely differs from the commonly-understood usage of "AGI" and may include systems like GPT-3 and Stable Diffusion.

I will be very curious to see how much mainstream political traction these issues get in the coming years and what the left/right divide on the issue will look like.

Who cares what the EU thinks? The EU is so far behind China and the US in AI, it will never catch up. They're so far behind because of this fetish for regulation.

https://twitter.com/punk6529/status/1509832349986562048

Consider that the entire EU technology sector is worth about 30% of just one of the biggest American tech companies. That was 9 months ago, so maybe they're up to 50% of Apple by now.

https://www.forbes.com/sites/markminevich/2021/12/03/can-europe-dominate-in-innovation-despite-us-big-tech-lead/?sh=7b22ce91d75c

Can Europe dominate in innovation? No.

As an example of the principles leading to heavy regulation in the EU, see the precautionary principle, which is a major idea in EU law but almost alien to the US or China:

https://en.wikipedia.org/wiki/Precautionary_principle

The classic American principle is "innovate and break stuff, with regulators acting if you break something serious." The precautionary principle, in the extreme, is "we do not need to know that you are breaking stuff to stop you from innovating." Or, as Stewart Brand put it:

"When an activity raises threats of harm to human health or the environment, precautionary measures should be taken even if some cause and effect relationships are not fully established scientifically. In this context the proponent of an activity, rather than the public [meaning the regulators, not the public in the sense of the people in general], should bear the burden of proof."

The EU regulators and lawmakers have decided that this principle is worth the price of being a tech industry backwater.

In general, a secular gap has developed between the EU economy and the US economy since the mid-1990s:

https://statisticstimes.com/economy/united-states-vs-eu-economy.php

Per capita, the gap has always been large, but it has grown substantially, especially in nominal terms, where the EU has been stagnant since about 2008.

The fewer players in the AI game, the less impossible it is that humanity will survive.

The more players in the game, the more likely we are to have an AI on our side of the equation.

Why should anyone care about humanity in the abstract?

Because sentience is the source of beauty, growth, belief, and all good in the world. Humans may very well be the only sentient beings in the galaxy, or universe. We can't tell from a sample size of one.

It would be a shame to lose all good things, therefore humanity's survival is good. Without humanity the idea of good would be lost, potentially forever. I would ask you why not care about humanity in the abstract?

Without humanity the idea of good would be lost, potentially forever.

«Humanity in the abstract» guarantees nothing more than the existence of complex machines with some reward functions; this kind of «the idea of good», by itself, doesn't do anything for me. To automatically value the existence of estimators of value is mind-boggling idiocy, circular reasoning and probably a category error, in my opinion – it straight up doesn't compute. Even utilitarians tend to recognize that they're bootstrapping a formalism from baseline human intuitions, for satisfaction of baseline human intuitions.

I would ask you why not care about humanity in the abstract?

Because a) states of the Universe have inherent aesthetic and moral value that can be appreciated in advance, and b) I assign no value, and in some cases negative value, to beauty, growth, belief and good things enjoyed by my enemies and moral abominations, to the extent that they have access to those notions at all. I'm not alone; google «Heaven and Hell».

Thus, it's not «humanity» but only «humanity that's aligned with me» whose survival matters. To make it clearer: friendly humanity surviving > empty and dead Universe > Universe populated by very satisfied but unfriendly humanity, paperclippers, orgasmium and other sorts of radically misaligned scum, in no particular order. With all this implies.

Less abstractly, this is a somewhat radical way to claim a stake in the future. I refuse to cooperate with clever defectors who will try to sell me, and others, on cooperation in the name of some utopian humanity stemming from them and inheriting their, but not my own and not anyone else's, values and individualities. This includes opposition to their ghoulish and duplicitous propaganda of utilitarian altruism, which inherently devalues individuality and agency, in favor of quantification of sensations experienced by... oh, don't worry about that, it's not your business.

I opt to make it my business.

Interesting critique of utilitarianism at some parts, but I can’t wrap my head around this:

to automatically value the existence of estimators of value is mind-boggling idiocy

I can understand not wanting your enemies to prosper etc, and forming a sort of game theory strategy of defecting in advance when it seems the only other option is to get rolled over eventually. That being said… from your first bit about essentially seeing humans as robots that are not inherently worth anything whatsoever, does that mean you would welcome human extinction?

From what you've written, it seems like under your worldview an asteroid hitting Earth tomorrow would be overall good, because it would kill more of your enemies than people whose worldview aligns with yours. Am I mistaken here?

I guess that's one of the reasons. If you can't reduce the technological lag through competition and innovation, you are even more exposed to strategic dependencies on foreign technologies. "AI protectionism": you set up a regulatory filter to protect from foreign tech and to give at least some advantage to domestic innovators.

As @arjin_ferman observes, this is in line with my more pessimistic scenarios. What is AGI, people ask? Why don't they just click the link? But to be fair, it took me a little while to discover the actual definition, here («WITH PRAGMATISM AGAINST POPULISM & STAGNATION», lmao):

'general purpose AI system' means an AI system that - irrespective of the modality in which it is placed on the market or put into service, including as open source software - is intended by the provider to perform generally applicable functions such as image and speech recognition, audio and video generation, pattern detection, question answering, translation and others; a general purpose AI system may be used in a plurality of contexts and be integrated in a plurality of other AI systems;

It's trivial to see how this applies to large language models like the GPT series, to say nothing of multimodal systems. We don't even have to get to GATO-style multitask training. If anyone thought the cooling effect would only start close to what we intuitively recognize as human performance: think again.

I advise people to notice how synchronized the push against individual agency enhancement is, and it's not, contra the insistence of quokka-economists, explained by innocuous market reasons like economies of scale and data moats. In the US, you have the EA movement with their longtermism, fearmongering, advocacy for «compute governance» and «pivotal acts», and independently from that – politicized corporate AI safety/fairness divisions that'll probably be used to distinguish «responsible actors» and delegitimize smaller ones in the next legislation cycle (like with the Oscars: not everyone can afford the demanded diversity package). In the EU, you have this regulation circus building on top of earlier anti-American big tech rackets masquerading as consumer protection. Of course Bostrom's hand is traceable to both sides of the pond, via the WEF in the Old World and the LW cluster in the New. In Russia... well, if Russia is still on the map in two years, they'll do well if they don't start burning their remaining ML talent for witchcraft; they're also shut out of international markets and can't acquire new compute. Japan is «LOL», as @gwern (not with us, I assume) puts it – they don't do any AI R&D worth mentioning, aren't sovereign, and will meekly follow the Western lead. In China, as gwern again points out, the newest American export regulations will increase the relative (although not absolute) capacity of the central government and big tech, which are already paranoid and illiberal to the highest degree:

The second-order effects here would seem to confirm Chinese autarky and trends towards secrecy, and further, to shift power from Chinese academia/small businesses/hobbyists/general-public to Chinese bigtech and thus, the Chinese government. If you've been following along, the big megacorps, especially in the wake of the attempted US execution of Huawei, have been developing their own DL ASICs for a while with an eye towards exactly this sort of scenario. [...]

If you are rich and well-connected and can finance the lobbying and guanxi and paperwork, you'll be able to get access to compute, one way or another, while the small guys can no longer click 'buy' on nvidia.com or just negotiate their usual datacenter orders and will pay higher costs or go without. It's the same reason why things like GDPR always wind up hurting FANG less than the activists expect (and hurt small actors like NGOs or startups much more), why 'regulatory capture' exists and why big actors often actively lobby for more regulation. It's going to be much harder and more expensive to get Nvidia GPUs or to get proprietary hardware (can you buy a TPU from Google? no, you cannot), therefore, small actors like hobbyists will be systematically disadvantaged and many priced out.

The rest of the world (sorry, fellow rest-of-worlders) consists of various shades of shithole and Western cryptocolony, wracked by climate disasters, brain drain and, crucially, a global economic crisis triggered in no small part by EU/American/Chinese COVID policies and now the war, extremely vulnerable even to half-hearted sanctions, and won't have the wherewithal to do ML research at scale.

Well, there are exceptions of course, hilariously two exceptions validating priors of Russian conspiracy nuts.

One is a dystopian surveillance state with legendary intelligence services and diplomatic acumen, a history of attempting and partially pulling off ludicrously illiberal tech regulations, but not (yet) any de facto obstruction on advanced AI research for smaller actors; the island where core DeepMind staff is physically located, and Stability.AI incorporated.

The other is a militarized ethnostate with infamously capable intelligence agencies, world-class lithography fabs, world-leading STEM&software talent, brazenly self-interested and defiant of international regulations, not beholden to NATO or really any other alliance, with a good track record in clandestine WMD development and non-signing of non-proliferation treaties and their equivalents.

So: USA, UK, Israel, maaaybe China if (and that's a big if) it doesn't immolate itself with its own bureaucracy and Special Military Operation in a few years. These three and a half – more like one and a half – actors will split the future of the light cone, the way it's going.

That's to be expected, of course. Individual agency is a threat to big structures; it always has been. Even allowing escape is a threat, an infinitely big one when multiplied by longtermist numbers and existential anxiety. There used to be a great Motte-adjacent blog, 451somethingsomething, a few years ago, with a good article about the alienation the author felt when he noticed the vibe of eusociality and hivemindedness in the society around him, and his own obsolescence as a stubborn independent cell. Not edgy, just desperate. That's kind of how I feel now.

You're quite right about restrictions closing in.

I just spent 20 minutes trying and failing to install Stable Diffusion (due to poor technical skills). It's amazing how much sanctimonious nonsense they can put in their license. They go on about being biased towards White/Western prompt language and announce that it probably cost about 11 tonnes of CO2 to produce the model, as though anyone cares. I don't understand why you would release software for free, on a very open license, and then put in this stuff. Wouldn't releasing your model like that mean you're libertarian-leaning?

The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.

Of course, there's already a general in 4chan's /h/ about it. h stands for precisely what you would expect a novel art-creation AI tool to be used for.

I wouldn't read much into it: they have to cover their asses (it doesn't work all that well). And yes, people care about CO2; 11 tonnes isn't a lot, some models burn the equivalent of a small city's energy budget, and we need to encourage energy-efficient approaches (hopefully, there's also the added benefit of slowing down major corporate players with near-infinite compute budgets, who can just keep scaling).

Emad personally, at least, has a pretty mature (IMO) understanding of the ethics around personal freedom; in fact, philosophy is one of his specializations. But he clearly doesn't like White supremacy a great deal, being a Bangladeshi, and might really appreciate debiasing the dataset towards a statistically accurate representation of human phenotypes across the globe. Just under 10% of the global population is white, after all (you'd never tell that from a random English-annotated image content sample).

I think it's time to come to grips with the fact that some of the other 90+% are enabling technical breakthroughs with their own people in mind.

Try some retard-proof guide or prebaked SD-based executable, I'm sure there was one linked on /r/StableDiffusion.

And don't give up after 20 minutes. Come on man, doing something for the first time can take orders of magnitude longer.
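
If you'd rather script it than hunt for an exe, here's a minimal sketch using Hugging Face's diffusers library, assuming a CUDA GPU; the model name and flags are my assumptions, so check the current docs:

```python
# pip install diffusers transformers torch
# Minimal sketch, assuming the Hugging Face diffusers package and a CUDA GPU.
import torch
from diffusers import StableDiffusionPipeline

# Downloads the weights on first run; the model repo may require
# accepting the license on huggingface.co beforehand.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,  # halves VRAM use on consumer cards
)
pipe = pipe.to("cuda")

image = pipe("a watercolor lighthouse at dusk").images[0]
image.save("out.png")
```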

Yeah I found an exe shortly after. The technical skill I lacked was the wisdom to look for an easier path. It's good fun to play around with.

But to put the CO2 into perspective, some friends of mine put 17 tonnes of CO2 into the atmosphere just today with international flights. Your average climate summit probably has a carbon footprint similar to the larger models'. Slowing down the big players is nice, but the silliness of it irritates me.

And do these people think a primary use-case isn't nuding celebrities or making 'stereotypical' content? Credit where credit's due, the version I got didn't have blockers on it. But it's like making a set of monkey bars and forbidding children to climb on top of them since they might fall. One, it goes against the point and two, it won't be obeyed. A rule made to be despised.

Is it so clear that this is one of those domains where present trends can be meaningfully extrapolated? Theoretically, bringing any modern country up to the cutting edge of AI amounts to transferring a few TB of data and one container-load of GPUs; if you also want people who can understand and iterate on it, perhaps add a classload of people and at most 8 years of training. The people behind the likes of DeepMind strike me not as actual +4σ-on-the-g-distribution individuals in genuinely short supply globally to anyone that is not the Cathedral, but as moderately smart people who were at the right place at the right time with the right motivations. As I see it, it's not clear that it wouldn't take just one Chinese drone swarm flying too close to the wrong rock for Japan to remember the last few times it surprised powers that wanted a slice of its sleepy islands with sudden and very prolific technological copypasta.

Also, I wanted to thank you for linking the Complex Numbers songs in that previous post of yours you linked. Pretty neat, and colours in my mental image of the direction the Russian offshoot of the rationalist community must have moved in. Did you have anything to do with the people behind it?

There used to be a great Motte-adjacent blog, 451somethingsomething, a few years ago, with a good article about the alienation the author felt when he noticed the vibe of eusociality and hivemindedness in the society around him, and his own obsolescence as a stubborn independent cell.

Status451? That guy who wrote the book review on Days of Rage?

Right, Ghosts in Every Machine

Last month was strange and horrifying. A guy with an interesting and novel project wanted to talk about it at a conference. A conference run by a solid, upstanding tech leader. And then everyone lost their shit. Suddenly, out of nowhere, everything was crazy. All I wanted to do was protect a conference I’ve enjoyed in the past, to do a nice thing for a guy who made a mistake in the eyes of the public. The next thing I know, I’m surrounded by zombies. News reporters made up lies about us. Communists on the internet joked (haha-no-but-really) about sending us to gulags. Coworkers of mine, not knowing who I am, told me to my face about this “crazy blog defending a horrible bigot,” and how they’re glad there aren’t any terrible people like that in our office. I’ll be laughing for a long time about how I’m officially certified “not supremacist” by the SPLC.

This is insanity. Why did these people do these strange things? Why did people I knew and trusted, interacted with daily, turn into horrible people yelling for my head? The most confusing part was their general ignorance of the details of the situation. Very few of them knew why they should be upset. None of them had ever read the speaker’s offending blog, and few of them had so much as seen the offending quotation. All they knew was that we’re the bad guys, and need to be punished.

Our critics are a part of something bigger than themselves. They’re keyed in to the Waze app, being the human serpents, while my Motorola flip-phone struggles to run the snake game. And why wouldn’t they? At every step along the way, it makes sense. Who cares why the narrative seems a little too perfect, they’re happy. It works for them. Their needs are met. By playing their part, responding to the signals in the memetwork, they enjoy health and happiness, wealth and social status. It would be stupid not to go along.

We here at Status451 have never really fit in. The signals are mangled by the mountains here in Zomia. We’re the single cells. The behaviours of everyone else made no sense to us, and the results were frightening. We can’t see the complex internal signals.

When the mass of cells is bearing down on you, just like in Simon-spore, you do have an option. You have mobility. Freedom. Our critics, keyed into the signal of their culture war narrative, gain a lot of benefits. They get their social needs provided for, in exchange for being the lifeblood of their egregore. But that is the cost: they must be the egregore. They lose the freedom to go their own way. We here have chosen the other path. Maybe “chosen” isn’t quite the right word; I’ve tried my whole life to fit in, be normal, and it just doesn’t work. But our other path, chosen or not, gives us the freedom to see things differently. We can be the masters of our own fate, hold a deeper, fuller agency over our lives. As long as we don’t wake the deep faceless things.

Everyone loves a weaponization narrative. Sure, every action performed by a rational actor (even more so by a coalition of actors) is calculated to secure their cozy status quo or disrupt a rival's. But you know what? The absence of any EU legislation would signify the same weaponization, successfully carried out by other actors, like AI-powered businesses. They would have lobbied their way toward just the right degree of individual agency, basically any degree you wish to pay for. A monthly agency subscription, a pretty interface; but they would also hoard some details of your precious agency in the background, for its safety and for better recommendations, and maybe some other things.

Every disruptive technology will be weaponized, rest assured. And not only the technology itself – that's the bread and butter of technocrats – folks of more modest means weaponize the mere threat of technology. Even your natural claim to agency is already part of a standard SJW toolset you would sneer at in other circumstances.

Instead of pamphleteering away the regulatory push, I'd first explore the strategies they devise, and where exactly this tide is moving. Or do you already have a good strategy of decentralized resistance?

One is a dystopian surveillance state with legendary intelligence services and diplomatic acumen, a history of attempting and partially pulling off ludicrously illiberal tech regulations, but not (yet) any de facto obstruction on advanced AI research for smaller actors; the island where core DeepMind staff is physically located, and Stability.AI incorporated.

It would be hilarious if it turned out that Brexit was the singular event that allowed the UK to remain sovereign. It would vindicate the deplorables like nothing else.

History is full of these weird little accidents and ironies, like Mussolini getting ousted in the end by the symbolic grand council of fascism he created.

The only clause about "open source" I found in EU reports says that current regulations should apply irrespective of whether the software is open source or not. Brookings doesn't discuss the details of the regulations at all, but makes a bunch of empirical claims (I picked the interesting ones):

  1. Open-source GPAI (OSAI) promotes competition and erodes monopolies.

  2. Regulation of OSAI would disincentivize its development by introducing liabilities and delays.

Re 1: Since OSAI has public-good features, any breakthrough would be instantly adopted by everyone, but only big players have enough resources to continuously integrate and build on others' breakthroughs. Some startups would be consumed altogether. If anything, releasing and adopting open source seems to profit monopolies more than anyone else. And curiously, Brookings admits this in their other article, about the benefits of OSAI:

At first glance, one might be inclined to think that open-source code enables more market competition, yet this is not clearly the case. […] In fact, for Google and Facebook, the open sourcing of their deep learning tools (Tensorflow and PyTorch, respectively), may have the exact opposite effect, further entrenching them in their already fortified positions

Re 2: The most influential open-source DL libraries, like PyTorch and TensorFlow, came from Big Tech. And since almost every big company has released its own library, it appears to be a common strategy – a competition to entrench your own de facto standard. The same goes for cloud infrastructure. Whether you like this status quo or not, it is the monopolies who provide most services and tools at the moment.

Would regulation change this situation? Big players would certainly endure the bureaucratic costs, but many small but valuable innovators (esp nonprofits) might be effectively barred from releasing open source.

The document (pdf) mentions “AI regulatory sandboxes” as a measure to alleviate the burden of small entrants:

The objectives of the AI regulatory sandboxes should be to foster AI innovation by establishing a controlled experimentation and testing environment in the development and pre-marketing phase with a view to ensuring compliance […] including by removing barriers for small and medium enterprises (SMEs), including start-ups.

Moreover, in order to ensure proportionality considering the very small size of some operators regarding costs of innovation, it is appropriate to exempt microenterprises from the most costly obligations, such as to establish a quality management system which would reduce the administrative burden and the costs for those enterprises

Would be interesting to see more substantial analysis of the regulations themselves.

And here it is, as predicted, the managers eventually running into conflict with technology that isn't under their control.

In a sense, there probably is some level of regulation that is appropriate; you don't want to live in a society where the algorithm decides what you can do, where you can go, and what you can say, all without any actual justification beyond probability.

But for some reason I don't think the bureaucrats in Brussels fear that outcome so much as they fear the technology undermining their power. Which is why they are going with this avenue of attack: making sure only they, and the people they accredit, are allowed to access the tech. Can't give world-changing powers to potential enemies, after all.

Ah how I miss the lull of liberalism.

Can you give me an example of how AI could undermine the power of “the bureaucrats in Brussels”?

AI could undermine the power of almost anyone, if carefully applied. Bureaucrats are concerned only insofar as interest groups pressure them. I don't get this folkloric obsession with bureaucrats per se.

Does the existence of openly available cryptographic tools and communication channels, in your mind, undermine the power of state security to quash dissidents? If not, why does Beijing insist on everyone using not Matrix/Element or Briar or even Telegram (with keys beyond their reach) but WeChat, where the Tovarisch Commissar can check up on you? Why do FSB and NSA and everyone else of that Big Brother mindset fight e2e encryption?

Because information asymmetry is power. Being able to go about your business unsupervised by A increases your power relative to A. Being supervised makes you, in the limit, merely an extension of A.
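
To make that concrete, a toy sketch of why e2e encryption takes the relay out of the loop; PyNaCl here is purely my illustration, not anything those apps actually use under the hood:

```python
# pip install pynacl
# Toy illustration of end-to-end encryption: the relay only ever sees
# ciphertext, so supervision requires a private key it doesn't have.
from nacl.public import PrivateKey, Box

alice, bob = PrivateKey.generate(), PrivateKey.generate()

# Alice encrypts to Bob's public key; a fresh nonce is generated internally.
ciphertext = Box(alice, bob.public_key).encrypt(b"meet at the usual place")

# Whatever carries `ciphertext` (an app server, an ISP, a state middlebox)
# learns nothing about the plaintext without one of the private keys.
assert Box(bob, alice.public_key).decrypt(ciphertext) == b"meet at the usual place"
```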

Largely the same principle applies to all areas where AI promises drastic improvements: any sort of generative tools, content curation tools, personal assistants, scientific instruments, CAD, robot control software, you name it. Everything that multiplies individual agency, you would want to be run on instances you can trust without the pinky-promise of industrial-security complex and unaccountable bureaucracies, who can either blacklist your content for some self-determined thoughtcrime, or manipulate the public into approving the criminalization of your thoughts.

Because then it improves the tradeoff between autonomy from them and your quality of life, and autonomy is inherently valuable (not to utilitarians I suppose, but to normal people of Western cultural background). And people would be able to afford more of it, and contribute less to the incumbent actors, and hope to see them diminish and become non-threats.

Hell, why do you think we've even moved over to a website managed by Zorba, from the cozy Advance Publications, Inc. property where the «Director of Policy» has an Atlantic Council background? Granted, that's still a compromise, the server being physically controlled by DigitalOcean, which has less (but not zero) interest in narrowly policing its clients than Reddit has in controlling its product for advertisers. Had Zorba the tools to become a 10X programmer in his free time (if you consider yourself one already, Zorba, then amend that to 100X), we'd probably be on some insanely robust but still convenient p2p platform, with... I dunno, the frontend hosted on magnet links served from an ETH smart contract, and the backend provided by a distributed swarm of anonymous TOR-routed plain key-value storage nodes, with every post in the tree PoW- and reputation-economy secured, and moderation decisions overlaid on top of it like a voluntary custom filter, cryptographically signed of course...

Which would be an inherently hard target, expensive to shut down for any aspiring Keffals equivalent or even for ADL-level or Western nation-state-level threats. And that'd become the norm.
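
For flavor, a back-of-the-napkin sketch of what one of those signed, PoW-stamped posts might look like; every name, parameter, and format here is made up, a doodle rather than a design:

```python
# pip install pynacl
# Hypothetical sketch: a content-addressed, author-signed post with a
# hashcash-style stamp that any storage node can verify before accepting.
import hashlib, json, os
from nacl.signing import SigningKey

DIFFICULTY = 2  # leading zero bytes required of the stamp (toy value)

def mine_stamp(payload: bytes) -> bytes:
    """Brute-force a nonce so sha256(payload + nonce) clears the difficulty."""
    while True:
        nonce = os.urandom(8)
        if hashlib.sha256(payload + nonce).digest()[:DIFFICULTY] == b"\x00" * DIFFICULTY:
            return nonce

def make_post(key: SigningKey, parent: str, text: str) -> dict:
    body = json.dumps({"parent": parent, "text": text}).encode()
    return {
        "body": body.hex(),
        "author": key.verify_key.encode().hex(),  # stable pseudonym
        "sig": key.sign(body).signature.hex(),    # proves authorship
        "stamp": mine_stamp(body).hex(),          # spam costs CPU, not permission
        "id": hashlib.sha256(body).hexdigest(),   # content address / KV key
    }

post = make_post(SigningKey.generate(), parent="root", text="first!")
# Moderation would live client-side: signed lists of post ids to hide,
# subscribed to voluntarily, rather than deletions at the storage layer.
```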

AI democratization is the logical next step of the entire FOSS and OSH project, and libertarian, anarchist and communalist political paradigms – go read Bakunin or Rothbard or Hoppe or Stallman, if you want. Right now, you've got to accept material and political sacrifices on par with the Amish to achieve (some qualified) off-grid autonomy, and nobody's willing to go that far; and they, too, exist solely at the sufferance of the oligarchy. At some point in the near future, AI labor multipliers will allow small and distributed communities with limited economic output to maintain the quality of life currently available only for good citizens of large nation states with big corporations; even allow them security, if not from major players then from random crime, if they opt out of the state protection racket. Sure, by that point those good citizens will enjoy hyperpalatable Netflix4D propaganda streaming over Neuralink into their Metaverse pods. But will the marginal increase in marketable quality be worth continuing to slurp down dreary propaganda? The tradeoff changes, and more people become willing to jump ship.

You conveniently assume linear or superlinear returns to capability, where AI will necessarily benefit the incumbent actors even more than commoners. That's not how proliferation works. Just like a 50 Mt warhead is not 100x more of a deterrent than a 500 kt one (assuming a similar ability to deliver it to the adversary's capital). There are many asymmetric functions for defense, logistic curves for human preference, and plenty of uncertainty in the future.
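
The warhead comparison checks out on a napkin if you use the usual cube-root scaling of blast radius with yield (a standard rule of thumb, not something from the linked pieces):

```python
# Toy arithmetic: destructive radius scales roughly as yield**(1/3),
# so 100x the yield buys nowhere near 100x the deterrence.
yield_small_kt, yield_big_kt = 500, 50_000  # 500 kt vs 50 Mt

ratio = (yield_big_kt / yield_small_kt) ** (1 / 3)
print(f"{ratio:.1f}x the blast radius for 100x the yield")  # ~4.6x
```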

A good route, an escape from this path-dependent road to serfdom, is – to put it mildly – not very likely.

But I don't yet see how it is impossible, and if it is possible, it's only via commodified AI.

Wow man, you are on fire today.

I sometimes disagree with your stuff, but more usually am not sure what to think as the inscrutability is too much to bear -- this is different.

Not sure what you've been doing to hone your edge, but I look forward to more of it (whatever that is).

Does the existence of openly available cryptographic tools and communication channels, in your mind, undermine the power of state security to quash dissidents?

Not really.

The US government did a perfectly fine job of crushing the alt right, and it had nothing to do with their communications not being secret enough.

If not, why does Beijing insist on everyone using not Matrix/Element or Briar or even Telegram (with keys beyond their reach) but WeChat, where the Tovarisch Commissar can check up on you? Why do FSB and NSA and everyone else of that Big Brother mindset fight e2e encryption?

A variety of reasons. I'm quite certain that they could get by even with e2e encryption being easily and publicly accessible though.

Largely the same principle applies to all areas where AI promises drastic improvements: any sort of generative tools, content curation tools, personal assistants, scientific instruments, CAD, robot control software, you name it.

So... how are any of these things going to help you achieve your desired anti-establishment political aims? Is your AI assistant going to put a reminder on your calendar telling you when it's time to take your AI robot buddies and go storm the palace? What happens when the palace guards have bigger and better AI robot buddies?

I'm not really trying to be cheeky. I'm just asking you to describe in sufficient detail what you're imagining. People thought throughout history that lots of different things were going to revolutionize human relations and put an end to tyranny - democracy, reason, public education, communism. None of them did. We're mostly still dealing with the same old shit that humanity has always dealt with. You can't just stop at "AI is awesome and I want it". You need a concrete argument for why things will actually be different this time - otherwise you end up with the classic communist problem where everyone just assumed "well of course if you tear down existing society then everyone will spontaneously rearrange themselves into new social relations that are perfectly just and equitable" without actually stopping to consider the details of how that was going to work.

You conveniently assume linear or superlinear returns to capability, where AI will necessarily benefit the incumbent actors even more than commoners.

Of course it will necessarily benefit the incumbent actors. The US has a rather high rate of gun ownership, and who do guns benefit more? The people or the government?

I'm not really trying to be cheeky

Guess you're just naturally good.

If I want to get some snarky demoralization content to the effect of «Russische Ivan, Rücken nach unten» or «come out and drink your corn syrup», I can go talk with @2rafa (actually looking forward to it). The topic of a hypothetical causal chain getting us from here to there is interesting, but I don't feel like addressing it in depth when you dismiss already present evidence against your model, i.e. efforts of incumbent actors to maintain their tech advantage, with a «not really» and «various reasons».

Is your AI assistant going to put a reminder on your calendar telling you when it's time to take your AI robot buddies and go storm the palace?

It'll do normal reminders, just without inserting propaganda and advertisements to alienate my children from me and eventually convince them that they're trans BLM crusaders. If that's how you want to frame it, though I believe someone like Rod Dreher would be more receptive to such a sales pitch.

If I had to update my beliefs every time I encountered evidence against them, I'd be able to hold very few beliefs about anything of importance.

As a general methodological point, I don't think there's anything objectionable about noting that you don't find an argument convincing, even though you're not prepared to give a fully-formed response to it.

I don't think there's anything objectionable about noting that you don't find an argument convincing, even though you're not prepared to give a fully-formed response to it.

Agreed.

  • A fact-checker-checker

  • A Regime-meme detector

  • A metaverse scrambler

  • Automated chaff generator against "radicalization experts"

Just off the top of my head.
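
The last one is trivial even with today's tools; a toy sketch with an off-the-shelf model, entirely my own illustration:

```python
# pip install transformers torch
# Toy chaff generator: flood keyword-driven monitoring with cheap,
# plausible-looking decoy text. GPT-2 is enough for the illustration.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

decoys = generator(
    "The meeting has been moved to",
    max_length=40,
    num_return_sequences=5,
    do_sample=True,  # sample for variety rather than greedy decoding
)
for d in decoys:
    print(d["generated_text"])
```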

A fact-checker-checker, A Regime-meme detector, A metaverse scrambler

Come on man.

Automated chaff generator against "radicalization experts"

The "radicalization experts" would of course be AI bots themselves, running on more powerful server clusters than whatever The Resistance could cobble together, so they would be able to respond to both human dissidents and the AI-generated content meant to distract them, without missing a beat.

I'll take being outgunned, over being forced to bring a knife to a gunfight.

commonly-understood usage of "AGI"

Honestly, I follow this field largely peripherally, but I don't think anyone understands what "AGI" means. There's a lot of scaremongering about what "generalized intelligence" will entail (it's a classic science fiction trope!), but from what I'm seeing of the development, it's not clear that "generalized" will be the sort of thing easily weaponized against us. It seems quite plausible to me that existing neural models won't inherently have any sort of long-term objectives or goals. Are emotions separable from intelligence, as science fiction authors would have us believe?

As a rough comparison, I feel like an observer in 1900 watching lots of attempts at powered heavier-than-air flight, but wondering if initiatives to stop investing in roads ("we'll fly!") or railroads (we do fly sometimes) are well-timed. It seems likely someone will be successful, but exactly what it will look like is unclear, and I'm not sure we should start planning for our ornithopter overlords.

I don't think anyone understands what "AGI" means

Most "confusion" over what AGI means seems to come from people who want to shift the goalposts to make ridiculous claims (e.g. that GPT-3 is "already an AGI").

One thing that would obviously qualify an AI as an AGI is if it could do everything a human could do. Obviously this would entail that it has agency, that it has some sort of body that it can use to interact with the physical world, etc.

Maybe some less powerful systems could qualify as AGI as well, e.g. a non-embodied agent that we could only interact with through text. But the fact that there are edge cases doesn't mean that the concept of AGI is particularly difficult to grasp, or that most people don't intuitively understand what is meant by the concept.

The EU’s attempt to regulate open-source AI is counterproductive.

Alternative explanation: no it's not, they know exactly what they're doing. To be fair, at first I also thought Ilforte was being a bit of a drama queen, and now I'm just surprised I had any bit of naivete left to be brutally beaten out of me.

I'm reminded of this Twitter thread I saw a while ago: https://twitter.com/punk6529/status/1509832349986562048 describing a proposal that Europe take the stance of becoming the "regulatory leader" of AI.

It sounds like a bit of a risky proposition to me: "if you want our business, you must abide by the rules we set that are stricter than everybody else's" sounds vulnerable, the more it is used, to a Laconic reply.

I dunno, it's not like the EU is about to get any other sort of competitive advantage. The US, of course, has the general competitive advantage of being the world's richest and most developed area, with a tradition of being the cutting edge of the cutting edge in IT and the ability to pay the biggest salaries - the EU is not going to achieve that. China, meanwhile, will presumably attempt to compete with unlimited state investment and participation, along with the ability to utilize data from an unrestricted surveillance society - the EU is not going to compete there, either. Why not try to be a regulation trendsetter, and hope this contributes to a general movement for strict AI regulation globally - so that if and when it starts happening, you'll be the trendsetter?

if you want our business, you must abide by the rules we set that are stricter than everybody else's

It works for China.
