
Culture War Roundup for the week of March 27, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Now that OpenAI has admitted AI safety into the mainstream, AI safetyists have naturally accepted the invitation.

The Future of Life Institute has published an open letter calling for a pause on «Giant AI experiments» (Archive). Their arguments are what one should expect by this point. Their prescriptions are as follows:

Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system's potential effects. OpenAI's recent statement regarding artificial general intelligence states that "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now.

Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.[4] This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.

AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.

Do we control our civilization? Maybe the folks at FHI do; I sure don't. Well, anyway…

Signatories (over 1000 in total) include Elon Musk, Steve Wozniak, Yuval Noah Harari, Yoshua Bengio, Connor Leahy, Stuart Russell, Andrew Yang, Emad Mostaque, Max Tegmark, Gary Marcus, Steve Omohundro, Matt Mahoney, Christof Koch, Sam Altman*, LessWrong disciples embedded in DeepMind/Meta, and various NGO/«policy» suits. Bolded are people who are reasonably well positioned and incentivized to, in fact, organize and authorize the training of «AI systems more powerful than GPT-4» in the next few months, though except for Altman they all only barely qualify; the actual GPT-5 is believed to already be in training and is, or was, planned to come out in late 2023.

Curiously absent – for now – are Yann LeCun, Jeff Dean, Demis Hassabis, John Carmack, and a few others. LeCun, at least, has committed to not signing. Here's hoping he won't find a horse's head in his sheets or something.

I do not have much of a comment at the moment. My perspective is that I despise people overly concerned with «Moloch» and want as many competitive superhuman AIs as possible; so, on one hand, slowing down and enabling the state to catch up and subjugate this tech for its purposes is a very bad, yet highly expected and perhaps inevitable, outcome of this race. This attitude is born out of desperation. In principle, their «AI Summer» option – where we increase capabilities over many years, getting the equivalent of the 20th century's civilizational shift in a decade instead of an explosive singularity – is not bad at all; I just don't believe in it.

On the other hand: seeing as nobody is closer to GPT-5 than OpenAI themselves (excepting DeepMind with Gato-2 or something better, as Gwern worries), it could be beneficial for our long-term outcomes to equalize the board somewhat, giving China more of a chance too. Geopolitics dictates that this should preclude the possibility of this policy being pursued in earnest; but really, China is so colossally outmatched in AI, so well and truly fucked by technological restrictions, and so mired in its own problems and the gratuitous stupidity of its own policymakers, that it may not be a factor in either case.

I must go, so that's all from me; hopefully this is enough to pass the «effort» bar required by the mods and prompt some discussion.


In happier news, arguably the most powerful open-source chatbot today is LLaMA-7B with a transfusion of ChatGPT 3.5-Turbo quirks, (not very) creatively called GPT4All. It's far beyond basic Alpaca (itself already an attempt to extract OpenAI's magic) and absurdly good for what it is: a 4.21 GB file of lossily compressed 7 billion weights trained… well, the way it's been trained – the AI equivalent of a movie camrip superimposed on the general web dump; the worst part is that it genuinely apes ChatGPT's politics and RLHF'd sanctimonious «personality» despite being 25 times smaller and probably 10 times dumber. It runs happily on very modest computers, and – unlike Alpaca – not only responds to instructions but maintains awareness of earlier parts of the dialogue (though it's sometimes overeager to say your part as well). I know that models vastly stronger than this should also be usable on commodity hardware and must be made available to commoners, but we may see regulation making it not so, and very quickly.
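For the curious, here's roughly what «runs happily on very modest computers» means in practice – a minimal sketch of CPU-only local inference using the llama-cpp-python bindings. The checkpoint path, prompt template, and parameters below are illustrative assumptions, not the project's official interface:

    # Minimal local-inference sketch; assumes: pip install llama-cpp-python.
    # GPT4All was distributed as a ~4.2 GB 4-bit quantized checkpoint for
    # llama.cpp-compatible runtimes; the file name below is a hypothetical path.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./gpt4all-lora-quantized.bin",  # hypothetical local file
        n_ctx=512,    # small context window, in line with LLaMA-7B's limits
        n_threads=4,  # plain CPU threads; no GPU required
    )

    # Alpaca-style instruction template, which GPT4All's training data follows.
    prompt = ("### Instruction:\n"
              "Explain what weight quantization does to a language model.\n"
              "### Response:\n")

    out = llm(prompt, max_tokens=128, stop=["### Instruction:"])
    print(out["choices"][0]["text"])

The arithmetic behind the file size roughly checks out: 7 billion weights at about 4.5 effective bits each (4-bit values plus per-block scaling factors) comes to around 4 GB, versus roughly 14 GB for the same model in fp16 – which is why it fits in RAM on an ordinary laptop.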

Consider the attached image representative of its mindset.

* (EDIT: I believe I found him there with Ctrl-F when I first opened the page, but he's not present in any extant version; I guess it was a hallucination. I really need to sleep, these slip-ups are worrying.)


It’s darkly funny to watch these groups retread the same chain of arguments that LessWrong hashed out over the last decade. IIRC the follow-up is supposed to be “but then you’re just handing it to bad actors!” Do you think that’ll actually make it into the mainstream?

More importantly, how can I bet on my mysterious future knowledge?

Oh yeah, ‘but what about China’ is already making the rounds: https://twitter.com/scobleizer/status/1641003564876087297?s=46&t=eRxnX0CnaM1TjjUItEyBeQ

Well, yes, what about China? I don't read LessWrong that much, but at least that Twitter link doesn't contain any interesting objections. If it's so dangerous, if it's so powerful, then someone else will build it – and China in particular has a lot of data. The strongest counterpoint I can come up with is that they are incompetent, silly commies and will fail, but that sounds like wishful thinking.

I can only take the cry for halting AI research seriously not as an actual proposal, but as a way to bring attention to the issue.

Microsoft has been single-handedly saving my portfolio lately. The market is still not pricing in “OpenAI becomes the most powerful organization on the planet”.

My favorite part of this timeline has been watching OpenAI go from a non-profit foundation meant to democratize AI research to a for-profit subsidiary of the most ruthless monopoly of the last 30 years, with zero regard for AI safety whenever there's a chance to sniff a profit.

Yeah, Sam Altman is a Machiavellian genius deal-maker, apparently. Or he was just in the right place at the right time.

Either way, he absolutely scooped those nerds at Google trying to just build tech for humanity and do research.

Very depressing how Google (the leader up until 2021, and maybe even still) chose not to release products, presumably for ethical reasons, and then Microsoft effectively bought OpenAI, gave GPT-4 access to the internet, and said "let her rip".

It's a bad sign. It makes it more likely that people will adopt the attitude that there's no point in containing AI since if I don't do it, someone else will.

> It makes it more likely that people will adopt the attitude that there's no point in containing AI since if I don't do it, someone else will.

But… that is obviously true, and was always obviously true. And you're saying it's bad if more people come to this true conclusion?

It's kinda hilarious, but (a) there actually was a lot of conversation about this as a possible solution in LessWrong spheres back in the late Obama era, and (b) there are still people today, even here, making the "just don't do that" argument.

I don't think it should be surprising anymore, given the failed post-COVID efforts to provide serious oversight of even our own country's gain-of-function research, but I found that pretty disappointing.

Yes. It's bad. Microsoft's irresponsibility has pushed the timeline forward.

Thus all the memes of:

2015: "Obvious no one will hook AGI up to the internet"

2023: Leeroy Jenkins!!!

Perhaps it was inevitable. Perhaps it is even desirable, according to some convoluted reasoning about hardware overhangs or whatever. As for me, I'd prefer to live more years in the human era.

Just because defection seems inevitable doesn't absolve the first person to defect from responsibility.

Defection is inevitable when arguments for AI safety are bad and fail to convince most intelligent agents in the space.

We could agree on non-proliferation of nuclear weapons because the harms are obvious and the benefits are essentially just more power. AI is a whole different ballgame.

Did Google not release anything for ethical reasons, or was it because they were a bunch of navel-gazers with the economic discipline of a DMV office?

I've seen claims that shipping anything was becoming impossible due to too much paper-pushing and too many people with veto power, so…

They didn’t want to release AI because it couldn’t help noticing inconvenient patterns in society.

Well, if they were determined enough to ship something, they could have figured out a way to beat it over the head until it stops noticing (like OpenAI did). That’s part of the “economic discipline” thing.

Given Google’s track record with finished products, there isn’t much doubt what the reason was.