
Culture War Roundup for the week of May 1, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Do you have a concrete argument against recursive self-improvement? We've already seen demonstrated capabilities in AI writing code and AI improving chip design; isn't it reasonable to expect that AI will soon be capable of rapid recursive self-improvement? It seems plausible that AI could significantly improve compute, enhance training algorithms, or fabricate better data for its successors to be trained on.

Recursive self-improvement is the primary thing that makes AI threatening and dangerous in and of itself (or that makes those who control it dangerous). I too think Yudkowsky's desire to dominate and control AI development is dangerous, a monopolist's kind of danger. But he clearly hasn't succeeded in any grand plan to social-engineer his way into controlling AI development; his social skills are highly specialized and only work on certain kinds of people.

So are you saying that recursive self-improvement won't happen, or that Yud's model is designed to play up the dangers of self-improvement?

I reject that I need to prove something as logically impossible to ward off Yud's insistence that it's inevitable and justifies tyranny. This is sectarian bullshit and I'll address it in the text if I ever finish it. I think it's very relevant that his idea of proper scientific process is literally this:

Jeffreyssai chuckled slightly.  "Don't guess so hard what I might prefer to hear, Competitor.  Your first statement came closer to my hidden mark; your oh-so-Bayesian disclaimer fell wide...  The factor I had in mind, Brennan, was that Eld scientists thought it was _acceptable_ to take thirty years to solve a problem.  Their entire social process of science was based on getting to the truth eventually. A wrong theory got discarded _eventually_—once the next generation of students grew up familiar with the replacement.  Work expands to fill the time allotted, as the saying goes.  But people can think important thoughts in far less than thirty years, if they expect speed of themselves."  Jeffreyssai suddenly slammed down a hand on the arm of Brennan's chair.  "How long do you have to dodge a thrown knife?"

...

"Good!  You actually thought about it that time!  Think about it every time!  Break patterns!  In the days of Eld Science, Brennan, it was not uncommon for a grant agency to spend six months reviewing a proposal.  They permitted themselves the time!  You are being graded on your speed, Brennan!  The question is not whether you get there eventually!  Anyone can find the truth in five thousand years!  You need to move faster!"

"Yes, sensei!"

"Now, Brennan, have you just learned something new?"

"Yes, sensei!"

"How long did it take you to learn this new thing?"

An arbitrary choice there...  "Less than a minute, sensei, from the boundary that seems most obvious."

"Less than a minute," Jeffreyssai repeated.  "So, Brennan, how long do you think it should take to solve a major scientific problem, if you are not wasting any time?"

Now there was a trapped question if Brennan had ever heard one.  There was no way to guess what time period Jeffreyssai had in mind—what the sensei would consider too long, or too short.  Which meant that the only way out was to just try for the genuine truth; this would offer him the defense of honesty, little defense though it was.  "One year, sensei?"

"Do you think it could be done in one month, Brennan?  In a case, let us stipulate, where in principle you already have enough experimental evidence to determine an answer, but not so much experimental evidence that you can afford to make errors in interpreting it."

Again, no way to guess which answer Jeffreyssai might want... "One month seems like an unrealistically short time to me, sensei."

"A short time?" Jeffreyssai said incredulously.  "How many minutes in thirty days?  Hiriwa?"

"43200, sensei," she answered.  "If you assume sixteen-hour waking periods and daily sleep, then 28800 minutes."

"Assume, Brennan, that it takes five whole minutes to think an original thought, rather than learning it from someone else.  Does even a major scientific problem require 5760 distinct insights?"

"I confess, sensei," Brennan said slowly, "that I have never thought of it that way before... but do you tell me that is truly a realistic level of productivity?"

"No," said Jeffreyssai, "but neither is it realistic to think that a single problem requires 5760 insights.  And yes, it has been done."

This guy has done fuck all in his life other than read, and write, and think. He has never been graded by a mean professor, never been regularized by shame and inadequacy in a class of other bright kids, never stooped to empirical science or engineering or business or normal employment, never really grokked the difference between the map and the territory. He has a wildly inflated impression of how powerful an intelligence contorted into a Hofstadterian loop can be, and he has infected other geeks with it.

Recursive self-improvement doesn't work very well: rationalists become cranks, AIs degenerate. As for better ideas, see around here. We can certainly improve somewhat, I think. In the limit, we will get an ASI from a closed experimental loop. That really is like creating a separate accelerated civilization.

But with ANNs, unlike Lisp scripts, it seems to require a great deal of compute, and compute doesn't just lie on the sidewalk. Yud thinks an AGI will just hack into whatever it wants, but that's a very sci-fi idea from the 1990s; something he, I believe, dreamed of implementing in the way already described – a singleton in a world of worthless meat sacks and classical programs. If you hack into an AWS cluster today to do your meta-learning training run, you'll suspend thousands of workloads, including Midjourney pics and hentai (that people …check in real time), and set off alarms immediately. If you hack into it tomorrow, you'll get traced back by an LLM-powered firewall.

No, I'm not too worried about an orthodox Yuddite self-improving AI.

But with ANNs, unlike Lisp scripts, it seems to require a great deal of compute, and compute doesn't just lie on the sidewalk. Yud thinks an AGI will just hack into whatever it wants, but that's a very sci-fi idea from the 1990s; something he, I believe, dreamed of implementing in the way already described – a singleton in a world of worthless meat sacks and classical programs. If you hack into an AWS cluster today to do your meta-learning training run, you'll suspend thousands of workloads, including Midjourney pics and hentai (that people …check in real time), and set off alarms immediately. If you hack into it tomorrow, you'll get traced back by an LLM-powered firewall.

You really can just siphon money out of the internet - people do it all the time to banks, in crypto, through scams, social engineering and so on. Steal money, buy compute. Our AI could buy whatever it needs with stolen money, or it could work for its money, or its owners could buy more compute for it on the very reasonable assumption that this is the highest-yielding investment in human history. We live in a service economy; bodies are not needed for a great deal of our work.

Say our AI costs $10 million a day to run (ChatGPT as a whole costs about $700K). $10 million a day is peanuts in the global economy. Global cybercrime costs an enormous amount of money, something like $6 trillion a year. I imagine most of that figure is the cost of fortifying websites, training people, fixing damage and so on, and only a small fraction is actually stolen. Even so, our AI needs only to grab 1% of that and launder it to fund itself. This is not difficult; people do it all the time. And compute costs are falling: some smallish models are already being run on MacBooks, as you explained earlier.
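To make the arithmetic explicit, here is a rough back-of-the-envelope check using the post's own illustrative figures (the $10M/day running cost, the ~$6T/year cybercrime estimate, and the 1% capture rate are all assumptions, not measured values):

```python
# Back-of-the-envelope check of the illustrative figures above.
daily_run_cost = 10_000_000             # assumed running cost, USD/day
annual_run_cost = daily_run_cost * 365  # ~3.65 billion USD/year

cybercrime_total = 6_000_000_000_000    # cited ~6 trillion USD/year (mostly defensive costs)
capture_rate = 0.01                     # the "grab 1%" assumption
annual_take = cybercrime_total * capture_rate  # 60 billion USD/year

print(f"Annual running cost: ${annual_run_cost / 1e9:.2f}B")
print(f"1% of cybercrime total: ${annual_take / 1e9:.0f}B")
print(f"Coverage: {annual_take / annual_run_cost:.1f}x")
# Output: ~$3.65B vs ~$60B, i.e. the assumed take covers the running cost roughly 16x over.
```

On those (generous) assumptions, funding is not the bottleneck, which is the point of the paragraph above.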

The danger is that somebody starts off with a weak superintelligence, perhaps from a closed experimental loop such as the one you describe. Then it becomes a strong superintelligence rapidly by buying compute, developing architectural improvements and so on. Either it is controlled by some clique of programmers, bureaucrats or whoever (I think we both agree that this is a bad outcome) or it runs loose (also a bad outcome). The only good outcome is if progress is slow enough that power stays distributed between the US, China, the EU, hackers and enthusiasts and whoever else, so that nobody gets a decisive strategic advantage. Recursive self-improvement in any meaningful form is catastrophic for humanity.

That really is like creating a separate accelerated civilization.

I think this means you agree that AIs can recursively self-improve, and that the result is akin to another superintelligence? Then don't we agree?

Anyway, the authorities are extremely dopey, slow and stupid. The much-vaunted US semiconductor sanctions against China meant that Chinese firms simply... rented US compute to train their models. Apparently stopping this is too hard for the all-powerful, all-knowing, invincible US government leviathan.

https://www.ft.com/content/9706c917-6440-4fa9-b588-b18fbc1503b9

“iFlytek can’t purchase the Nvidia chips, but it’s not a problem because it can rent them and train our data sets on other companies’ computer clusters,” said an executive familiar with the AI firm’s operations.

“It’s like a car rental system. You can’t take the chips out of the facility. It’s a huge building with a computer cluster, and you buy time on CPUs [central processing unit] or GPUs to train the models,” the person said.

While iFlytek cannot own the chips outright under US export controls, two employees said the rental system was a good, albeit more expensive, alternative. An engineer at iFlytek said the company “rents the chips and equipment on a long-term basis, which is effectively the same as owning them”.

iFlytek was banned from directly buying these semiconductors after Washington blacklisted it for its alleged role in providing technology for state surveillance of Uyghur Muslims in Xinjiang.

In some cases, SenseTime bought advanced chips directly through its own subsidiaries that are not on Washington’s “entity list”, according to three senior employees familiar with the situation.

SenseTime said it “strictly complies with various domestic and foreign trade-related laws and regulations” and that the group had developed a programme to ensure it “meets trade compliance standards”.