This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Notes -
More developments on the AI front:
Big Yud steps up his game, not to be outshone by the Basilisk Man.
Now he officially calls for preemptive nuclear strikes on suspicious unauthorized GPU clusters.
If we see the AI threat as a nuclear-weapons-level threat, only worse, this is not unreasonable.
Remember when the USSR planned a nuclear strike on China to stop its great power ambitions (only to have the greatest humanitarian who ever lived, Richard Milhous Nixon, veto the proposal).
Such Quaker squeamishness will have no place in the future.
So, the outlines of the Katechon World are taking shape. What will it look like?
It will look great.
You will live in your room, play the original World of Warcraft and Grand Theft Auto: San Andreas on your PC, read your favorite blogs, and debate intelligent design on your favorite message boards.
Then you will log on to The Free Republic and call for more vigorous enhanced interrogation of terrorists caught with unauthorized GPUs.
When you are bored in your room, you will have no choice but to go outside, meet people, admire the things around you, take pictures of whatever really impressed you with your Kodak camera, and, when you are really bored, play Snake on your Nokia phone.
Yes, the best age in history, the noughties, will retvrn. Forever, protected by the CoDominium of the US and China.
For another angle on this problem: looking at GPUs isn't going to be good enough. Maybe consumer-grade GPUs are the best widely available chip architecture we have for running AI today, but there's no reason that has to always be the case. If AI is going to be as important for the future as Yud claims, there will be immense pressure to develop custom chips that are more optimized for running AI. This is basically the progression that happened in Bitcoin mining.
You can't just track chips with a particular architecture because we can make new architectures! To do this effectively you'd need to track every chip fabrication facility in the world, examine every architecture of chip they make, and somehow figure out if that architecture was optimized for running some AI software. Even if this monitoring infrastructure were in place, what if some entity comes up with some clever and heretofore unseen software+hardware AI pair that's super efficient? Are we going to never allow any new chip architectures on the off chance they are optimized for running an AI in a way we can't detect?
For nuclear weapons we at least have the ability to identify the necessary inputs (uranium and the means of enriching it). For AI, do we even have constraints on what its software will look like? On how chips optimized for running it will be structured?
They did this; they're called the A100/H100, and AI chip architecture is improving at a super-Moore's-law rate. I went to an NVidia data science event semi-recently. That this isn't already being debated in Congress tells me we're not going to be anywhere near fast enough.
Yuddites have actually thought in detail about all of this; it's not like coming up with designs for world domination is hard work or needs any «research» of the kind Yud allegedly conducted. The chokepoints are obvious. In his now-infamous TIME article, Yud explicitly proposes lowering the legal compute budget with every advancement in sample efficiency.
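To make the sample-efficiency rule concrete, here is a minimal sketch of the idea; the function name and all numbers are invented for illustration, not taken from the TIME article:

```python
# Illustrative sketch only: shrink the legal compute cap in proportion
# to algorithmic progress, so that as training methods get more
# sample-efficient, the effective capability ceiling stays roughly flat.

def adjusted_compute_cap(current_cap_flop: float, efficiency_gain: float) -> float:
    """If algorithms become k-times more sample-efficient, divide the cap by k."""
    return current_cap_flop / efficiency_gain

cap = 1e25                            # hypothetical starting cap, in FLOP
cap = adjusted_compute_cap(cap, 2.0)  # a 2x sample-efficiency gain halves it
```

The point of the rule is that a fixed FLOP ceiling becomes meaningless as algorithms improve, so the ceiling has to ratchet downward.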
Zimmerman expresses the common yuddite sentiment with regard to private GPUs.
There's plenty of space at the bottom. They haven't even started on the approach outlined by Krylov, years ago:
It's not like they haven't started thinking along similar lines, however.
I think this was already proposed by Yud and Roko and the like: regulate the hell out of the roughly three-to-seven chipmakers in the world and you'd already have a major pillar of the anti-AI regime.
Hence Roko's proposal of a massive GPU buyback. No, I don't think it'll particularly work (at least not 100%), though I suppose it could be somewhat more effective than the typical gun buyback.
They're not; you want what Google is calling a "TPU" and what NVidia is calling a "Tensor Core GPU" - operations on ridiculously coarse data types at ridiculously high speeds. Science+engineering simulations want FLOPS on 64-bit numbers, and video games want 32-bit, but AI is happy with 8-bit and doesn't even seem picky about whether you use 4 or 5 or 7 bits of that for mantissa.
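The data-type trade-off above can be sketched numerically. This is a rough illustration only: the format table approximates real layouts (the FP8 names follow the commonly cited E4M3/E5M2 split), and the range formula ignores exponent bias details and special encodings:

```python
# Rough comparison of floating-point formats: a float is sign bit +
# exponent bits + mantissa bits. HPC simulation wants many mantissa
# bits (fp64); AI inference tolerates very few, which is why tensor
# hardware targets 8-bit types.

FORMATS = {
    # name: (exponent_bits, mantissa_bits)
    "fp64":     (11, 52),  # science/engineering simulation
    "fp32":     (8, 23),   # video games, classic GPU compute
    "fp8-e4m3": (4, 3),    # 8-bit variant favoring precision
    "fp8-e5m2": (5, 2),    # 8-bit variant favoring dynamic range
}

def describe(name):
    exp_bits, man_bits = FORMATS[name]
    max_exp = 2 ** (exp_bits - 1) - 1   # rough maximum unbiased exponent
    dynamic_range = 2.0 ** max_exp      # rough largest representable magnitude
    rel_step = 2.0 ** -man_bits         # relative spacing between values
    return dynamic_range, rel_step
```

Even between the two 8-bit variants, moving one bit from mantissa to exponent trades a coarser grid for a much wider range, and trained networks mostly don't seem to care.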
I'd guess a cap on FLOPs (well, OPs, on whatever datatype) and another on memory bandwidth would work for the current software paradigm, for "serial" (as much as you can call a chip with 64k multipliers "serial") runs ... except that neural nets parallelize really well, and there's probably still a lot of room to improve interconnect bandwidth and latency, and if you do that well enough then at some point you don't care so much if there's a cap on serial execution speed. Human latency is enormous; no need to beat it by too many orders of magnitude.
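The parallelization worry above reduces to trivial arithmetic (every number here is invented): capping per-chip speed does nothing to total training compute if you can simply add chips.

```python
# Back-of-the-envelope: total training compute is chips x speed x time,
# so a 10x per-chip speed cap is exactly offset by 10x more chips,
# as long as the workload parallelizes well (neural-net training does).

def total_flop(num_chips: int, flop_per_sec_per_chip: float, seconds: float) -> float:
    return num_chips * flop_per_sec_per_chip * seconds

MONTH = 30 * 86_400  # seconds in a 30-day training run

fast_cluster   = total_flop(1_000,  1e15, MONTH)  # uncapped chips
capped_cluster = total_flop(10_000, 1e14, MONTH)  # 10x slower, 10x as many
```

Both runs deliver the same total compute, which is why an effective cap would also have to limit cluster size or interconnect, not just chip speed.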
The depressing bit is that the "hardware" side of the pair might be "just reuse the existing hardware with this new super efficient software". Even if the initial cap is low enough that we can't get to an AI smart enough to "foom" itself, if there are clever and heretofore unseen software improvements possible (and that's the safe way to bet) then human researchers will hit on them themselves eventually.
Tracking every chip fab in the world isn't really that hard an ask, at least for fabs making chips on the last three or four process generations.
Chip development and production is incredibly centralized and bottlenecked, simply having control over Intel and TSMC would cover most of it, and if you could get China on board, it would be trivial to monitor the smaller players.
ASICs are usually quite obviously optimized for a particular function, so I doubt that we'd ever end up in a situation where we have both a novel AI architecture that we're somehow unaware of, and enough new custom made chips to train and run it on without it being blatantly obvious.
Also, there really isn't much diversity in architectures in the first place, let alone rogue actors with the technical skill to pull off an equivalent of x86 or ARM and then implement it in silicon.
That's true, but the number of clusters on the scale required to train SOTA models like GPT-4 and above has to be very limited, maybe dozens to a few hundred at most. I doubt that's an obstacle to a determined state apparatus.
That's leaving aside unexpected algorithmic breakthroughs that let far smaller or even consumer hardware run powerful models of course.