
Culture War Roundup for the week of March 27, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Now that OpenAI has admitted AI safety into the mainstream, AI safetyists have naturally accepted the invitation.

The Future of Life Institute has published an open letter calling for a pause on «Giant AI experiments» (Archive). Their arguments are what one should expect by this point. Their prescriptions are as follows:

Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system's potential effects. OpenAI's recent statement regarding artificial general intelligence, states that "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now.

Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.[4] This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.

AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.

Do we control our civilization? Maybe the folks at FHI do; I sure don't. Well, anyway…

Signatories (over 1000 in total) include Elon Musk, Steve Wozniak, Yuval Noah Harari, Yoshua Bengio, Connor Leahy, Stuart Russell, Andrew Yang, Emad Mostaque, Max Tegmark, Gary Marcus, Steve Omohundro, Matt Mahoney, Christof Koch, Sam Altman *, LessWrong disciples embedded in DeepMind/Meta, and various NGO/«policy» suits. Bolded are people who are reasonably well positioned and incentivized to, in fact, organize and authorize training «AI systems more powerful than GPT-4» in the next few months, though except for Altman they all only barely qualify; the actual GPT-5 is believed to already be in training and is, or was, planned to come out in late 2023.

Curiously absent – for now – are Yann LeCun, Jeff Dean, Demis Hassabis, John Carmack, and a few others. LeCun, at least, has committed not to sign. Here's to hoping he won't find a horse's head in his sheets or something.

I do not have much of a comment at the moment. My perspective is that I despise people overly concerned with «Moloch» and want as many competitive superhuman AIs as possible. So, on one hand, slowing down and enabling the state to catch up and subjugate this tech for its purposes is a very bad, yet highly expected and perhaps inevitable, outcome of this race. This attitude is born of desperation; in principle, their «AI Summer» option, where we increase capabilities over many years and get the equivalent of the 20th century's civilizational shift in a decade instead of an explosive singularity, is not bad at all; I just don't believe in it.

On the other hand: seeing as nobody is closer to GPT-5 than OpenAI themselves (excepting DeepMind with Gato-2 or something better, as Gwern worries), it could be beneficial for our long-term outcomes to equalize the board somewhat, giving China more of a chance too. Geopolitics dictates that this should preclude the possibility of the policy being pursued in earnest, but really, China is so colossally outmatched in AI, so well and truly fucked by technological restrictions, and so mired in its own problems and the gratuitous stupidity of its own policymakers, that it may not be a factor in either case.

I must go, so that's all from me; hopefully this is enough to pass the «effort» bar required by the mods and prompt some discussion.


In happier news, arguably the most powerful open-source chatbot today is LLaMA-7B with a transfusion of ChatGPT 3.5-Turbo quirks, (not very) creatively called GPT4All. It's far beyond basic Alpaca (itself already an attempt to extract OpenAI's magic) and absurdly good for what it is: a 4.21 GB file of lossily compressed 7 billion weights trained… well, the way it's been trained – the AI equivalent of a movie camrip superimposed on a general web dump. The worst part is that it genuinely apes ChatGPT's politics and RLHF-ed sanctimonious «personality» despite being 25 times smaller and probably 10 times dumber. It runs happily on very modest computers, and – unlike Alpaca – not only responds to instructions but maintains awareness of earlier parts of the dialogue (though it's sometimes overeager to say your part as well). I know that models vastly stronger than this should also be usable on commodity hardware and must be made available to commoners, but we may see regulation making it not so, and very quickly.
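
For a sense of why a 7B model fits on ordinary hardware at all, here is a minimal back-of-the-envelope sketch. The quantization scheme in it (4-bit weights plus one fp16 scale per block of 32 weights) is an assumption modeled on common llama.cpp-style formats, not a claim about GPT4All's actual file layout:

```python
# Back-of-the-envelope check on the reported 4.21 GB file size for a
# 7-billion-parameter model. The scheme assumed here (4-bit weights plus
# one fp16 scale per block of 32 weights) is an assumption, not a
# description of GPT4All's exact on-disk format.

N_PARAMS = 7e9          # ~7 billion weights
BITS_PER_WEIGHT = 4     # 4-bit quantized values (assumed)
BLOCK_SIZE = 32         # weights sharing one scale factor (assumed)
BITS_PER_SCALE = 16     # fp16 scale per block (assumed)

total_bits = N_PARAMS * (BITS_PER_WEIGHT + BITS_PER_SCALE / BLOCK_SIZE)
total_gb = total_bits / 8 / 1e9

print(f"Estimated file size: {total_gb:.2f} GB")
# -> Estimated file size: 3.94 GB, in the same ballpark as the 4.21 GB file;
#    the gap is plausibly tensors kept at higher precision plus metadata.
```

At that size the whole model sits comfortably in the RAM of a cheap laptop, which is why it "runs happily on very modest computers".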

Consider the attached image representative of its mindset.

* (EDIT: I believe I found him there with Ctrl-F when I first opened the page, but he's not present in any extant version; I guess it was a hallucination. I really need to sleep, these slip-ups are worrying).

/images/16800616737543523.webp

I wish the people hyperbolically exclaiming that AI-induced human extinction is right around the corner would publicly commit to bets about when it will happen. Between this petition and Yudkowsky's "Death with Dignity" we have a lot of rationalist-adjacent people who seem to think we'll all be gone in <5 years. If that's what they truly believe, then they should commit to that prediction so we can all laugh at them in 2028 when it almost certainly doesn't come true.

There's a ton of uncertainty about AI's scalability and whether current progress will follow something like Moore's Law or whether we've just been picking the low-hanging fruit. AI alignment people are filling that uncertainty with maximally negative projections that an anti-human singularity is right around the corner. The biggest human inventions in terms of scale and impact were all the advances in mechanization of the Industrial Revolution, which took more than a century to unfold. The biggest invention in terms of impact relative to time was the Manhattan Project. Alignment people are saying (or at least strongly implying) that AI will have a much larger impact than the Industrial Revolution on a time-scale shorter than the development of nukes, while also being basically uncontrollable. People like Yudkowsky are smart, but they're predicting things an order of magnitude beyond the bounds of previous human history. Such predictions aren't rare, but they're usually made by snake-oil salesmen saying "This new invention will totally revolutionize everything! Trust me!"

Am I off-base here? I've been paying attention to AI developments but not to the degree that some people have, so there's a chance that there's a compelling case for AI being a combination of 1) inevitable, 2) right around the corner (<5 years away), and 3) uncontrollable.

I see plenty of people here making quite confident predictions of impending AI doom. Can anyone steelman for me an argument for seriously believing this while not going full Unabomber on top AI scientists and research centers? I mean, if we are talking about an imminent threat of all of humanity ceasing to exist, surely some innocent lives being sacrificed, and some personal danger, is negligible by comparison. People commit political violence over much more trivial things. The whole AI panic crowd feels extremely contrived and performative to me.

Non-state violence has essentially no possibility of indefinitely stopping all AI development worldwide. Even governmental violence stopping it would be incredibly unlikely – it seems politically impossible that governments would treat it with more seriousness than nuclear proliferation and keep doing so for a long period – but terrorists have no chance at all. Terrorists would also be particularly bad at stopping secret government AI development, and AI has made enough of a splash that such development seems inevitable even if you shut down all the private research. If at least one team somewhere in the world still develops superintelligence, then what improves the odds of survival is that they do a good enough job and are sufficiently careful that it doesn't wipe out humanity. Terrorism would cause conflict and alienation between AI researchers and people concerned about superintelligent AI, reducing the odds that researchers take AI risk seriously, which makes it profoundly counterproductive.

It's like asking why people who are worried about nuclear war don't try to stop it by picking up a gun and attacking the nearest nuclear silo. They're much better off trying to influence the policies of the U.S. and other nuclear states to make nuclear war less likely (a goal the U.S. government shares, even if they think it could be doing a much better job), and having the people you're trying to convince consider you a terrorist threat would be counterproductive to that goal.

If the world is still here in five years, I'll publicly admit I overestimated the danger. If it's still here in two to three years, I'll already be pleasantly surprised. In my book, we're well on schedule for a short takeoff.

At this point, most of the really fun things I intend to do are post-singularity, and I don't really emotionally care if I die, so long as everyone else dies as well. So in a very strange way, it balances out to a diffuse positive anticipation.

There's no easy reference class to fit this into for comparison.

Did AI start with GPT-2 or GPT-3, in the sense of 'this is pretty impressive and what AI ought to look like in terms of fairly general capabilities'? Then it's three to five years old. Did AI start with Deep Blue, or the Dartmouth Workshop, or something else? Then it's over 20 years old, or in its 70s. That would fit the industrial-scale timeline you propose.

Or should we compare to digital-era applications? ChatGPT has blown away every internet app in 'speed to reach 100 million users': 2 months, as opposed to 9 months for TikTok. That would suggest there's a qualitative difference there, and even TikTok is an AI-adjacent sort of thing.

Or do we say it's fundamentally different from everything else because AI is about intelligence, as opposed to moving widgets around as in the Manhattan Project or the Industrial Revolution? The Industrial Revolution itself is a pretty big phase-shift from the Agricultural Revolution, which took thousands of years. Should an 18th-century intellectual have predicted industrial development based on agriculture's extreme slowness? Predicting the future is very hard; things can happen for the first time. I think at the rate things are developing, <5 years is quite reasonable. That's the gap from GPT-2 to GPT-4. We live in a digital era of very rapid growth, and industrial-era intuitions aren't appropriate. There are graphs showing that the computing investment in these projects doubles in a matter of months. Even 'levelling off' from doubling times of 5.7 months to 9.9 months is like decelerating to a mere 300 km/s. Doubling in under a year is still very rapid growth!

https://arxiv.org/pdf/2202.05924.pdf
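
To make those doubling times concrete, here is a quick sketch of the implied annual growth factors; the 5.7- and 9.9-month figures come from the paragraph above, and the rest is plain arithmetic:

```python
# Annualized growth implied by a given doubling time. Even the slower,
# "levelled off" doubling time still means training compute grows by
# more than 2x per year.

def annual_growth(doubling_months: float) -> float:
    """Factor by which a quantity grows over 12 months, given its doubling time."""
    return 2 ** (12 / doubling_months)

for months in (5.7, 9.9):
    print(f"doubling every {months} months -> ~{annual_growth(months):.1f}x per year")

# doubling every 5.7 months -> ~4.3x per year
# doubling every 9.9 months -> ~2.3x per year
```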

Not an expert, but I think there's a reasonable chance AI ends up causing my death. But that could be a thousand years from now. First come major advances in health care and quality of life – enough of them on my timeline to push off natural death time after time. But at some point aligned AI will create something and be the biblical Eve eating the apple, and that program will develop human desires for dominance and human behavioral traits. There will be an AI-versus-AI war, and the anti-human AI will win.

I don't think the existential risk is primarily near-term.

Near-term risks are probably more related to overturning normal human geopolitics and politics.

You can laugh at me if we're all still alive in 2033 and the reason we're still alive is that AI safety turned out to be a nothingburger. To give a sense of how ridiculous progress has been: the start of the deep learning revolution was in 2012, 11 years ago now...

To be fair, we're barely ten years past the start of the AI revolution. At this stage – ten years after the first private internet providers – most of the kinds of services and products based on the internet weren't yet possible. Nobody looking at the internet as it existed in 1992 would have anticipated things like controlling your thermostat over the internet, or Amazon, or even Facebook. In fact, pages with simple HTML and images took a minute to load.

The state of a technology in its infancy doesn't say much about its future.

People were doing online banking and shopping in 1984:

https://en.wikipedia.org/wiki/Telidon

People were writing about things like an all-consuming social media internet in 1909:

https://en.wikipedia.org/wiki/The_Machine_Stops

The fact that massive progress has recently happened and is continuing to happen, and that tens of billions of dollars of capital and much of the top young talent are now working in the area, is very strong evidence that we're going to continue to see major advances over the next decade.