
Culture War Roundup for the week of May 1, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


More developments on the AI front:

Big Yud steps up his game, not to be outshone by the Basilisk Man.

Now, he officially calls for preemptive nuclear strikes on suspicious unauthorized GPU clusters.

If we see the AI threat as being like the nuclear weapons threat, only worse, it is not unreasonable.

Remember when the USSR planned a nuclear strike on China to stop its great-power ambitions (only for the greatest humanitarian who ever lived, Richard Milhous Nixon, to veto the proposal).

Such Quaker squeamishness will have no place in the future.

So, the outlines of the Katechon World are taking shape. What will it look like?

It will look great.

You will live in your room, play the original World of Warcraft and Grand Theft Auto: San Andreas on your PC, read your favorite blogs, and debate intelligent design on your favorite message boards.

Then you will log on to Free Republic and call for more vigorous enhanced interrogation of terrorists caught with unauthorized GPUs.

When you get bored in your room, you will have no choice but to go outside, meet people, admire the things around you, take pictures of whatever really impresses you with your Kodak camera, and, when you are really bored, play Snake on your Nokia phone.

Yes, the best age in history, the noughties, will retvrn. Forever, protected by a CoDominium of the US and China.

edit: links again

For another angle on this problem: looking at GPUs isn't going to be good enough. Maybe consumer grade GPUs are the best widely available chip architecture we have for running AI today, but there's no reason that has to always be the case. If AI is going to be as important for the future as Yud claims, there is going to be immense pressure to develop custom chips that are more optimized for running AI. This is basically the progression that happened in Bitcoin mining.

You can't just track chips with a particular architecture because we can make new architectures! To do this effectively you'd need to track every chip fabrication facility in the world, examine every architecture of chip they make, and somehow figure out if that architecture was optimized for running some AI software. Even if this monitoring infrastructure were in place, what if some entity comes up with some clever and heretofore unseen software+hardware AI pair that's super efficient? Are we going to never allow any new chip architectures on the off chance they are optimized for running an AI in a way we can't detect?

For nuclear weapons we at least have the ability to identify the necessary inputs (uranium and the means of enriching it). For AI, do we even have constraints on what its software will look like? On how chips that are optimized for running it will be structured?


They did this; they're called the A/H100, and AI chip architecture is improving at a super-Moore's-law rate. I went to an NVidia data science event semi-recently; that this isn't already being debated in Congress tells me we're not going to be anywhere near fast enough.

Yuddites have actually thought in detail about all of this; it's not like coming up with designs for world domination is hard work or needs any «research» of the sort Yud allegedly conducted. The chokepoints are obvious. In his now-infamous TIME article, Yud explicitly proposes lowering the legal compute budget with every advancement in sample efficiency:

Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.

Zimmerman expresses the common Yuddite sentiment with regard to private GPUs.

There's plenty of space at the bottom. They haven't even started on the approach outlined by Krylov, years ago:

How is this achieved? In different ways, but the main thing is disarmament and putting things under control. That is: they take away everything that can harm the state from the commoner, and control everything that can be used to cause such harm. In the end, everything will be taken away from him and everything will be controlled too.

The same can be said, for example, about free time. If the commoner used it, he could think or invent something harmful. The chance is admittedly tiny, but it exists. So, for example, the working day is not reduced, but there are lots of activities that you can - and want! - to indulge in after-hours: from watching TV series to travelling ("it's necessary to see everything").

However, all this is obvious. I would like to talk about how the average person will be deprived of the last dangerous resource that is still in his hands - namely, money.

Indeed. Money is a very dangerous thing. With money, you can buy everything, including any serious possessions. For example, you can buy good steel, a diamond drill machine, and something on top of that. And you can make a primitive yet working firearm. In the hyperinformative era, you may get the blueprints of a peashooter ... And then, with this thing, a man will shoot at the godlike punisher-polizei. Or at a respectable, esteemed black ripper who has become used to killing people "as much as needed", because he is a black Muslim and has a gun (of course, without a chip), and white people do not even have a knife, only soft chubby legs and arms. Or at the dearest child of an important European official who adopted the laws under which all of the above became possible. By the way, one guy did it; his name was Breivik. He, truth be told, legally bought a ready-made weapon. But he could have made it himself too.

It seems that the easiest solution is to ban all dangerous things. Weapons, machine tools, explosive components, etc. But here is the trouble: there are more and more things in the world, and it is not clear which of them are safe and which are not. The nomenclature is too big, you can't keep track of everything. Besides, explosives and poisonous substances are only a small part of dangerous things. For example, computer programs - don't they come in all kinds? and who knows all they can do? And you can buy them for money, too. Well, that's it. Money - real money for which you can buy everything - should be taken from the commoner. In toto. And he should be given something else, simpler. Toy money with which he can only buy safe things, toys, and a little pleasure.

How? Well, just like this.

We have already become used - in Russia too - to such small but pleasant little things as bonus points, loyalty-card credits and other such niceties. Come to the tavern seven times, and on the eighth you will get a free mug of beer, a cookie or a discount from the main account. At this point, these little toy moneys are mainly utilized to goad the client into spending real money. …

Consider: someone is issuing (why and by what right - let's omit these details for now) some conditional unit of exchange, the "cost" of which is defined as 0.5 points of a card of some central Moscow store, plus 0.3 discount points of a popular restaurant chain, plus 0.2 points of a chain of women's clothing and perfume. It should be noted that money – roubles or dollars – is out of the question, only points. This conventional unit is called, say, the "Sale". Sales are exchanged at trades – arranged like currency exchange trades – for other points, and (for the time being) are bought for money. And it is profitable: by buying Sales, you can buy more goods and services than for money directly. By three percent, not more - but that won't hurt, right?

Now imagine that you are offered the chance to cover part of a price with Sales, on favorable terms. You save five percent, or even a whole ten percent. However, you will be able to shop only where Sales are accepted. But it's not like you need anything else. You need to eat, get dressed, go to the cinema... you can get all this for Sales. And real money will be needed only to pay taxes and utility bills.

Why such difficulties? Little by little, one travels far. Dangerous toys - dollars, rubles, euros - should be taken away from people gradually, and in such a manner that they themselves give them up. In the end, the toy "Sales" will actually cover the basic needs of an ordinary citizen, and even ten percent above that. Do you insist on getting paid in real money? You will not be refused. But there will be rumors that you are saving money for something bad. Maybe for drugs? Or are you sending it to terrorists? Your account might begin to attract interest. And you personally, too. No, no, no, if you have nothing illegal, no one will bother you. Why do you need money anyway? Oh, you're saving for a diamond ring for your loved one? Well, we didn't foresee that, so we'll suggest to the jewelers that they put special points on diamonds and include it in the Sale basket.

It's not like they haven't started thinking along similar lines, however.

To do this effectively you'd need to track every chip fabrication facility in the world,

I think this was already proposed by Yud and Roko and the like; regulate the hell out of the, like, three-to-seven chipmakers in the world and you'd already have a major pillar of the anti-AI regime.

Hence why Roko proposed a massive GPU buyback. No, I don't think it'll particularly work (at least not 100%), though I suppose it could be somewhat more effective than the typical gun buyback.

Maybe consumer grade GPUs are the best widely available chip architecture we have for running AI today

They're not; you want what Google is calling a "TPU" and what NVidia is calling a "Tensor Core GPU" - operations on ridiculously coarse data types at ridiculously high speeds. Science+engineering simulations want FLOPS on 64-bit numbers, and video games want 32-bit, but AI is happy with 8-bit and doesn't even seem picky about whether you use 4 or 5 or 7 bits of that for mantissa.
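To make that range/precision tradeoff concrete, here is a minimal sketch (plain Python, no vendor libraries) computing the largest finite value of a generic IEEE-style binary float for a few exponent/mantissa splits. Note this is illustrative only: real hardware FP8 formats often tweak the encoding (e.g. giving up fewer codes to inf/NaN) to squeeze out extra range.

```python
def fmax(exp_bits, man_bits):
    """Largest finite value of a generic IEEE-style binary float with
    the given field widths (top exponent code reserved for inf/NaN)."""
    bias = 2 ** (exp_bits - 1) - 1
    max_exp = (2 ** exp_bits - 2) - bias      # highest non-reserved exponent
    max_frac = 2 - 2 ** (-man_bits)           # mantissa of all ones: 1.111...1
    return max_frac * 2.0 ** max_exp

# Two common 8-bit splits, plus FP16/FP32 for comparison.
for name, e, m in [("E4M3 (8-bit)", 4, 3), ("E5M2 (8-bit)", 5, 2),
                   ("FP16", 5, 10), ("FP32", 8, 23)]:
    print(f"{name:>13}: max ~ {fmax(e, m):.3e}")
```

Under these vanilla IEEE conventions, E5M2 tops out at 57344 while E4M3 tops out at 240: a huge range difference bought with a single extra mantissa bit of precision. That's exactly the knob AI workloads get to turn and 64-bit simulation workloads don't.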

somehow figure out if that architecture was optimized for running some AI software

I'd guess a cap on FLOPs (well, OPs, on whatever datatype) and another on memory bandwidth would work for the current software paradigm, for "serial" (as much as you can call a chip with 64k multipliers "serial") runs ... except that neural nets parallelize really well, and there's probably still a lot of room to improve interconnect bandwidth and latency, and if you do that well enough then at some point you don't care so much if there's a cap on serial execution speed. Human latency is enormous; no need to beat it by too many orders of magnitude.
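Some hedged arithmetic on why a per-cluster speed cap alone doesn't bind once workloads parallelize well; every number below is a made-up round figure, purely for illustration:

```python
# Hypothetical numbers for illustration only.
cap_ops_per_s = 1e18      # assumed per-cluster cap: 1 exa-OP/s sustained
target_ops    = 1e25      # assumed total compute for a frontier-scale run
n_clusters    = 10        # colluding capped clusters, if the run splits

seconds_one = target_ops / cap_ops_per_s
seconds_ten = seconds_one / n_clusters

print(f"one capped cluster: {seconds_one / 86400:.0f} days")
print(f"ten capped clusters: {seconds_ten / 86400:.0f} days")
```

A cap on per-cluster speed just stretches wall-clock time (about 116 days under these toy numbers) or is recovered by splitting the run across more capped clusters, which is why a serious regime would have to track total compute budgets, not just chip speeds.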

what if some entity comes up with some clever and heretofore unseen software+hardware AI pair that's super efficient

The depressing bit is that the "hardware" side of the pair might be "just reuse the existing hardware with this new super efficient software". Even if the initial cap is low enough that we can't get to an AI smart enough to "foom" itself, if there are clever and heretofore unseen software improvements possible (and that's the safe way to bet) then human researchers will hit on them themselves eventually.

Tracking every chip fab in the world isn't really that hard an ask, at least for fabs making chips in the last 3 or 4 generations.

Chip development and production is incredibly centralized and bottlenecked; simply having control over Intel and TSMC would cover most of it, and if you could get China on board, it would be trivial to monitor the smaller players.

ASICs are usually quite obviously optimized for a particular function, so I doubt that we'd ever end up in a situation where we have both a novel AI architecture that we're somehow unaware of, and enough new custom made chips to train and run it on without it being blatantly obvious.

Also, there really isn't much diversity in architectures in the first place, let alone rogue actors with the technical skills to pull off an equivalent of x86 or ARM and then implement it in silicon.

That's true, but the number of clusters on the scale required to train SOTA models like GPT-4 and above has to be very limited, maybe dozens to a few hundred at most. I doubt that's an obstacle to a determined state apparatus.
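A back-of-envelope sketch of why that cluster count is so limited. Aside from the A100's published peak throughput, every figure here is a public estimate or outright assumption, not a confirmed number:

```python
# Rough sizing of a GPT-4-scale training run on A100-class hardware.
a100_peak_flops = 312e12   # A100 peak dense BF16 throughput (spec sheet)
utilization     = 0.35     # assumed model FLOPs utilization (MFU)
train_flops     = 2e25     # rumored GPT-4-scale compute budget (assumption)
train_days      = 90       # assumed wall-clock training time

gpus = train_flops / (a100_peak_flops * utilization * train_days * 86400)
print(f"~{gpus:,.0f} GPUs")   # on the order of tens of thousands
```

Tens of thousands of tightly interconnected accelerators in one place is a datacenter-scale footprint, with the power draw and construction activity to match, which is the intuition behind "dozens to a few hundred clusters at most".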

That's leaving aside unexpected algorithmic breakthroughs that let far smaller or even consumer hardware run powerful models of course.