This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Now that OpenAI has admitted AI safety into the mainstream, AI safetyists have naturally accepted the invitation.
The Future of Life Institute has published an open letter calling to pause «Giant AI Experiments» (archive). Their arguments are what one should expect by this point, down to asking «Should we risk loss of control of our civilization?». Their prescription, in short: all AI labs should immediately pause the training of AI systems more powerful than GPT-4 for at least six months, and governments should step in with a moratorium if such a pause cannot be enacted quickly.
Do we control our civilization? Maybe the folks at FHI do; I sure don't. Well, anyway…
Signatories (over 1000 in total) include Elon Musk, Steve Wozniak, Yuval Noah Harari, Yoshua Bengio, Connor Leahy, Stuart Russell, Andrew Yang, Emad Mostaque, Max Tegmark, Gary Marcus, Steve Omohundro, Matt Mahoney, Christof Koch,
Sam Altman*, LessWrong disciples embedded in DeepMind/Meta, and various NGO/«policy» suits. Bolded are people who are reasonably well positioned and incentivized to, in fact, organize and authorize training «AI systems more powerful than GPT-4» in the next few months, though except for Altman they all only barely qualify; the actual GPT-5 is believed to already be in training and is, or was, planned to come out in late 2023. Curiously absent – for now – are Yann LeCun, Jeff Dean, Demis Hassabis, John Carmack, and a few others. LeCun, at least, commits to not signing. Here's to hoping he won't find a horse's head in his sheets or something.
I do not have much of a comment at the moment. My perspective is that I despise people overly concerned with «Moloch» and want as many competitive superhuman AIs as possible; so, on one hand, slowing down and enabling the state to catch up and subjugate this tech for its own purposes is a very bad, yet highly expected and perhaps inevitable, outcome of this race. This attitude is born of desperation. In principle, their «AI Summer» option, where we increase capabilities gradually over many years, getting the equivalent of the 20th century's civilizational shift in a decade instead of an explosive singularity, is not bad at all; I just don't believe in it.
On the other hand: seeing as nobody is closer to GPT-5 than OpenAI itself (excepting DeepMind with Gato-2 or something better, as Gwern worries), it could be beneficial for our long-term outcomes to equalize the board somewhat, giving China more of a chance too. Geopolitics dictates that this should preclude the policy being pursued in earnest; but really, China is so colossally outmatched in AI, so well and truly fucked by technological restrictions, and so mired in problems and the gratuitous stupidity of its own policymakers that it may not be a factor in either case.
I must go, so that's all from me; hopefully this is enough to pass the «effort» bar required by the mods and prompt some discussion.
In happier news, arguably the most powerful open-source chatbot today is LLaMA-7B with a transfusion of ChatGPT 3.5-Turbo quirks, (not very) creatively called GPT4All. It's far beyond basic Alpaca (itself already an attempt to extract OpenAI's magic) and absurdly good for what it is: a 4.21 GB file of lossily compressed 7 billion weights trained… well, the way it's been trained, the AI equivalent of a movie camrip superimposed on the general web dump. The worst part is that it genuinely apes ChatGPT's politics and RLHF'd sanctimonious «personality» despite being 25 times smaller and probably 10 times dumber. It runs happily on very modest computers and – unlike Alpaca – not only responds to instructions but maintains awareness of earlier parts of the dialogue (though it's sometimes overeager to say your part as well). I know that models vastly stronger than this should also be usable on commodity hardware and must be made available to commoners, but we may see regulation making it not so, and very quickly.
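For the curious, here is a minimal sketch of what chatting with it locally can look like, assuming the gpt4all Python bindings (which may postdate the original standalone binary release); the model file name and prompts below are placeholders for illustration, not official defaults. Incidentally, the 4.21 GB figure is roughly what 4-bit quantization predicts: 7 billion weights at half a byte each is about 3.5 GB, with the rest being embeddings and file-format overhead.

```python
# A minimal sketch, assuming the `gpt4all` Python bindings (pip install gpt4all).
# The model file name is a placeholder; point it at whichever quantized
# checkpoint you actually downloaded.
from gpt4all import GPT4All

model = GPT4All("gpt4all-lora-quantized.bin")  # placeholder checkpoint name

# chat_session() keeps earlier turns in the prompt, which is what gives the
# model its awareness of earlier parts of the dialogue.
with model.chat_session():
    print(model.generate("Explain RLHF in one paragraph.", max_tokens=200))
    print(model.generate("Now explain why people complain about it.", max_tokens=200))
```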
Consider the attached image representative of its mindset.
* (EDIT: I believe I found him there with Ctrl-F when I first opened the page, but he's not present in any extant version; I guess it was a hallucination. I really need to sleep, these slip-ups are worrying.)
[Attached image: /images/16800616737543523.webp]
Is there anyone here who 1) thinks that AI x-risk is a threat that should be taken seriously, and 2) also thinks that this letter is a bad idea? If so, can you explain your reasoning? And also explain what restrictions on AI development you would support?
For a group of people who are allegedly very concerned with the possibility that AI will soon wipe out humanity, Rationalists are suspiciously resistant to any proposals for actually slowing and regulating AI development. A lot of the comments on this letter on LW and /r/ssc are very critical. If your stance is "I wish we could slow AI development and I support the letter in spirit, but I think it's unlikely to work", then that's one thing. But the critical comments seem to suggest that the comment authors either don't support any AI regulation at all, or else they're engaged in motivated reasoning to try to convince themselves that it's not even worth trying (e.g. "this letter will have a net negative impact due to its effect on capabilities researchers who don't like it" - the good ol' "if you fight your enemies, they win" tactic for concern trolling).
It lends support to my intuition that most AI x-riskers don't actually take the idea of x-risk very seriously, and on a gut level they think the benefits of AI are so likely to outweigh the downsides that there's no issue with pushing full steam ahead on capabilities research.
(Obviously if you're a full on utopian optimist and you consciously affirm that x-risk is not a serious threat, then there is no contradiction in your position and none of this applies to you.)
What would be accomplished during a "six-month pause" that would make it worth the enormous difficulty of getting that sort of international cooperation, even if the petition had any chance of success at all? Why should people concerned about unaligned AI consider this the best thing to spend their credibility and effort on? It's not as if "alignment research" were some separate thing with a clear path forward, where if only we pause the AI training runs we'll have the time for a supercomputer to finish computing the Alignment Solution. Alignment researchers are stumbling around in the dark trying to think of ideas that will eventually help the AI developers when they hit superintelligence. It is far more important to make sure that the first people to create a superintelligence consider "the superintelligence exterminates humanity" a real threat and try to guide their work accordingly, which if anything this petition interferes with by weakening the alignment-concerned faction within AI research. (The petition also talks about irrelevant and controversial nonsense like misinformation and automation; the last thing we want is alignment being bureaucratized into a checklist of requirements for primitive AI while the real concern is sidelined, or politicized into a euphemism for left-wing censorship.) Right now the leading AI research organization is run by people who started off trying to help AI alignment; that seems a lot better than the alternative! To quote Microsoft's "Sparks of Artificial General Intelligence: Early experiments with GPT-4" paper: "Equipping LLMs with agency and intrinsic motivation is a fascinating and important direction for future work."
Here is the baseline: if the first people to create superintelligence aren't concerned with alignment, there's a decent chance they will deliberately give it "agency and intrinsic motivations". (Not that I'm saying the Microsoft researchers necessarily would; maybe they only said that because LLMs are so far from superintelligence. But it isn't a promising sign.) Personally I'm inclined to believe that there's no reason a superintelligent AI needs to have goals, which would make "create a Tool AI and then ask it to suggest solutions to alignment" the most promising alignment method. But even if you think otherwise, surely the difference between having superintelligence developed by researchers who take alignment seriously and researchers who think "let's try giving the prospective superintelligence intrinsic motivations and write a paper about what happens!" matters a lot more than whatever "alignment researchers" are going to come up with in six months.