Culture War Roundup for the week of March 27, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Now that OpenAI has admitted AI safety into the mainstream, AI safetyists have naturally accepted the invitation.

The Future of Life Institute has published an open letter calling for a pause on «Giant AI experiments» (Archive). Their arguments are what one should expect by this point. Their prescriptions are as follows:

Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system's potential effects. OpenAI's recent statement regarding artificial general intelligence, states that "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now.

Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.[4] This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.

AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.

Do we control our civilization? Maybe the folks at FHI do; I sure don't. Well, anyway…

Signatories (over 1000 in total) include Elon Musk, Steve Wozniak, Yuval Noah Harari, Yoshua Bengio, Connor Leahy, Stuart Russell, Andrew Yang, Emad Mostaque, Max Tegmark, Gary Marcus, Steve Omohundro, Matt Mahoney, Christof Koch, Sam Altman *, LessWrong disciples embedded in DeepMind/Meta, and various NGO/«policy» suits. Bolded are people who are reasonably well positioned and incentivized to, in fact, organize and authorize training «AI systems more powerful than GPT-4» in the next few months, though, except for Altman, they all only barely qualify; actual GPT-5 is believed to already be in training and is, or was, planned to come out in late 2023.

Curiously absent – for now – are Yann LeCun, Jeff Dean, Demis Hassabis, John Carmack, and a few more. LeCun, at least, commits to not signing. Here's to hoping he won't find a horse's head in his sheets or something.

I do not have much of a comment at the moment. My perspective is that I despise people overly concerned with «Moloch» and want as many competitive superhuman AIs as possible. So on one hand, slowing down and enabling the state to catch up and subjugate this tech for its purposes is a very bad, yet highly expected and perhaps inevitable, outcome of this race. This attitude is born of desperation; in principle, their «AI Summer» option, where we increase capabilities over many years, getting the equivalent of the 20th century's civilizational shift in a decade instead of an explosive singularity, is not bad at all; I just don't believe in it.

On the other: seeing as nobody is closer to GPT-5 than OpenAI themselves (excepting DeepMind with Gato-2 or something better, as Gwern worries), it could be beneficial for our long-term outcomes to equalize the board somewhat, giving China more of a chance too. Geopolitics dictates that this should preclude the possibility of the policy being pursued in earnest, but China is so colossally outmatched in AI, so well and truly fucked by technological restrictions, and so mired in its own problems and the gratuitous stupidity of its own policymakers that it may not be a factor in either case.

I must go, so that's all from me; hopefully this is enough to pass the «effort» bar required by the mods and prompt some discussion.


In happier news, arguably the most powerful open-source chatbot today is LLaMA-7B with a transfusion of ChatGPT 3.5-Turbo quirks, (not very) creatively called GPT4All. It's far beyond basic Alpaca (itself an attempt to extract OpenAI's magic) and absurdly good for what it is: a 4.21 GB file of lossily compressed 7 billion weights trained… well, the way it's been trained, the AI equivalent of a movie camrip superimposed on a general web dump. The worst part is that it genuinely apes ChatGPT's politics and RLHF'd sanctimonious «personality» despite being 25 times smaller and probably 10 times dumber. It runs happily on very modest computers and – unlike Alpaca – not only responds to instructions but maintains awareness of earlier parts of the dialogue (though it's sometimes overeager to say your part as well). I know that models vastly stronger than this should also be usable on commodity hardware and must be made available to commoners, but we may see regulation making it otherwise, and very quickly.
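For anyone who wants to poke at it, running one of these quantized 7B checkpoints locally really is a few lines of Python. A minimal sketch, assuming the llama-cpp-python bindings and an already-downloaded ggml-format checkpoint; the file path and prompt template below are placeholders, not GPT4All's exact conventions:

```python
# Minimal local-inference sketch; assumes `pip install llama-cpp-python`
# and a quantized ggml checkpoint on disk (the path below is a placeholder).
from llama_cpp import Llama

llm = Llama(model_path="./ggml-model-q4_0.bin")  # ~4 GB of 4-bit quantized weights

out = llm(
    "### Instruction:\nSummarize the FLI open letter in one sentence.\n### Response:\n",
    max_tokens=128,
    stop=["### Instruction:"],  # stop before the model starts "saying your part"
)
print(out["choices"][0]["text"])
```

It will be slow on a laptop CPU, but it runs, which is the whole point.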

Consider the attached image representative of its mindset.

* (EDIT: I believe I found him there with Ctrl-F when I first opened the page, but he's not present in any extant version; I guess it was a hallucination. I really need to sleep; these slip-ups are worrying.)

/images/16800616737543523.webp

Something to be aware of is that as much as we are in an AI hype cycle for new products, we're also in an AI-safety charlatanism hype cycle. In exploring organizations like the Future of Life Institute, I've come across links to and associations with non-profits and other advocacy groups, at least in the U.S., that have legislative goals with regard to AI. Most of the goals sound plausibly good: "let's prevent the creation of the paperclip machine that destroys Earth." Some are obvious DEI backdoors (the key term to look for here is "algorithmic bias"). Even that's fine, as it's just the existing culture war conflict.

What bothers me is the number of people in these organizations who have absolutely zero technical background or capability. I mean literally none – they've never even dabbled with the basic Python libraries that make training and running a model a three-line endeavor. They never took anything beyond single-variable calculus. They have zero statistics and probability background and so fall for BASIC statistical fallacies over and over again. Even beyond the hard math and comp-sci subjects, a lot haven't taken the time to investigate what I think are the two philosophical domains most important to current LLMs and "baby" AIs: epistemology and linguistics.
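To make the "three-line endeavor" concrete, here is roughly what running a model looks like with the Hugging Face transformers library; this is a generic illustration with a small public model, not anything specific to the groups being criticized:

```python
from transformers import pipeline  # pip install transformers

# Downloads a small public model (GPT-2) and runs text generation with it.
generator = pipeline("text-generation", model="gpt2")
print(generator("AI governance should", max_length=30)[0]["generated_text"])
```

Fine-tuning takes more than three lines, but not by much; the barrier to getting hands-on experience is genuinely that low.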

Previously, I would chuckle and think, "Sure, make your policy recommendations. You have no idea what you're talking about anyway." What I'm seeing now, however, is a strong desire to build a bureaucracy of "experts" to endorse politically pre-approved measures. This is the CDC during COVID.

I don't want to send the message that only hardcore ML engineers can have opinions on AI. Most of my career has been spent working with engineers, so I know that they are no less prone to poor, motivated, or emotional reasoning than any other demographic, and they often have trouble explaining the concepts they are demonstrated experts in.

The only solid cautionary advice I think I can offer in this case is to be suspicious of any group which presents a too-pure benevolent mission (e.g. "our mission is to make the world a better place for everyone, always and forever"), is alarmist in its doom-saying ("we have to act NOW!"), yet recommends unspecific remedies ("we need to develop mechanisms of ensuring collaboration and the integration of multiple viewpoints ... for safety ... and, like, follow the science").

a lot haven't taken the time to investigate what I think are the two philosophical domains most important to current LLMs and "baby" AIs: epistemology and linguistics.

How many fancy linguistic theories have been thrown into the dustbin of history by brute-forcing a neural network on digital libraries? Look how linguists like Noam Chomsky and Emily M. Bender cover their ears and squeal in pure terror as their life's work is falsified before their very eyes.

Can someone spell out how this falsification works? Do we actually understand how LLMs parse things? Or if you don't think they parse, then does anyone know what the hell they do instead?

As far as I know, the argument goes something like: attention mechanism, context matters, yada yada. Which doesn't really cut it.

Falsifying a theory does not, in fact, require proposing an alternative theory – only showing that the theory's predictions do not come to pass, and the predictions of generative linguists have absolutely failed.

Though on a broad level I'd say that yes, we do know how LLMs "parse things", which is why we can build them. They are a successful and very informative application of a specific theory of language acquisition (and, more generally, of statistical learning; see the Chomsky vs. Norvig debate, the Bitter Lesson, etc.).

How is the specification of the attention mechanism, informed by decades of research in NLP, less a proof of understanding than Chomskyites' purely speculative hot takes like Merge? It's not as if we randomly sampled the space of architectures until something clicked and a shoggoth was summoned (even if Yud believes that's how it goes). This progress in machine learning is research and the gaining of understanding in the classical scientific manner, even if it's often looked down upon, whereas the sort of "understanding" and "interpretability" that linguists and safetyists demand is Talmudic verbal magic: conveniently compact, "elegant" by the standards of a comic book, with every symbol of the incantation resolvable into human-parseable logic in each activation. Sorry, Grothendieck didn't get it and neither will we. That's okay. That's how science works.
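For what it's worth, the "specification" in question is short enough to write down. A minimal sketch of scaled dot-product attention, the core transformer operation, in plain NumPy; this is a toy single-head version without the learned projection matrices and multi-head machinery that real models add:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V – each output is a weighted mix of the values."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V

# Toy example: 3 "tokens" with 4-dimensional embeddings attending to each other.
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
print(scaled_dot_product_attention(x, x, x).shape)  # (3, 4)
```

Whether that counts as "understanding" language is exactly the dispute, but the mechanism itself is not an incantation.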

Consider the article linked here

https://www.themotte.org/post/421/culture-war-roundup-for-the-week/79642?context=8#context

Thanks.

I'd like to say I'm going to read and absorb your links, but we'll see if I get time.