This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Now that OpenAI has admitted AI safety into the mainstream, AI safetyists have naturally accepted the invitation.
The Future of Life Institute has published an open letter calling for a pause on «Giant AI experiments» (Archive). Their arguments are what one should expect by this point. Their prescriptions are as follows:
Do we control our civilization? Maybe the folks at FHI do; I sure don't. Well, anyway…
Signatories (over 1,000 in total) include Elon Musk, Steve Wozniak, Yuval Noah Harari, Yoshua Bengio, Connor Leahy, Stuart Russell, Andrew Yang, Emad Mostaque, Max Tegmark, Gary Marcus, Steve Omohundro, Matt Mahoney, Christof Koch, Sam Altman*, LessWrong disciples embedded in DeepMind/Meta, and various NGO/«policy» suits. Bolded are people who are reasonably well positioned and incentivized to, in fact, organize and authorize training «AI systems more powerful than GPT-4» in the next few months, though except for Altman they all only barely qualify; the actual GPT-5 is believed to already be in training and is, or was, planned to come out in late 2023. Curiously absent – for now – are Yann LeCun, Jeff Dean, Demis Hassabis, John Carmack, and a few more. LeCun, at least, commits to not signing. Here's to hoping he won't find a horse's head in his sheets or something.
I do not have much of a comment at the moment. My perspective is that I despise people overly concerned with «Moloch» and want as many competitive superhuman AIs as possible; so, on one hand, slowing down and enabling the state to catch up and subjugate this tech for its purposes is a very bad, yet highly expected and perhaps inevitable, outcome of this race. This attitude is born out of desperation; in principle, their «AI Summer» option, where we increase capabilities over many years, getting the equivalent of the 20th century's civilizational shift in a decade instead of an explosive singularity, is not bad at all; I just don't believe in it.
On the other: seeing as nobody is closer to GPT-5 than OpenAI themselves (excepting DeepMind with Gato-2 or something better, as Gwern worries), it could be beneficial for our long-term outcomes to equalize the board somewhat, giving China more of a chance too. Geopolitics dictates that this should preclude the possibility of this policy being pursued in earnest, but really, China is so colossally outmatched in AI, so well and truly fucked by technological restrictions, and so mired in its own problems and the gratuitous stupidity of its policymakers, that it may not be a factor in either case.
I must go, so that's all from me; hopefully this is enough to pass the «effort» bar required by the mods and prompt some discussion.
In happier news, arguably the most powerful open-source chatbot today is LLaMA-7B with a transfusion of ChatGPT 3.5-Turbo quirks, (not very) creatively called GPT4All. It's far beyond basic Alpaca (itself an attempt to extract OpenAI's magic) and absurdly good for what it is: a 4.21 GB file of lossily compressed 7 billion weights trained… well, the way it's been trained, the AI equivalent of a movie camrip superimposed on the general web dump; the worst part of it is that it genuinely apes ChatGPT's politics and RLHF'd sanctimonious «personality» despite being 25 times smaller and probably 10 times dumber. It runs happily on very modest computers and – unlike Alpaca – not only responds to instructions but maintains awareness of earlier parts of the dialogue (though it's sometimes overeager to say your part as well). I know that models vastly stronger than that should also be usable on commodity hardware and must be made available to commoners, but we may see regulation making it not so, and very quickly.
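For those who want to poke at it themselves, here's a minimal sketch of running a quantized 7B model like this on an ordinary CPU. I'm assuming the llama-cpp-python bindings (the project ships its own chat binary, so this is just one illustrative route); the model filename and the Alpaca-style prompt template are assumptions too, and the distributed weights may need conversion to whatever format the bindings expect:

```python
# A sketch, not gospel: assumes `pip install llama-cpp-python` and a
# locally downloaded quantized weights file (filename is illustrative).
from llama_cpp import Llama

# Why a 7B model fits in ~4 GB: 7e9 weights * 4 bits ≈ 3.5 GB raw, and
# per-block scale factors plus a few unquantized layers push a real
# file up toward the 4.21 GB figure cited above.
llm = Llama(model_path="./gpt4all-lora-quantized.bin", n_ctx=512)

# Alpaca-style instruction template (an assumption about the tuning
# format); stopping at the next instruction header keeps the model
# from "saying your part as well".
out = llm(
    "### Instruction:\nExplain RLHF in two sentences.\n\n### Response:\n",
    max_tokens=128,
    stop=["### Instruction:"],
)
print(out["choices"][0]["text"].strip())
```

Nothing here needs a GPU; the whole point is that 4-bit quantization puts the memory footprint within reach of an ordinary laptop.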
Consider the attached image representative of its mindset.
* (EDIT: I believe I found him there with Ctrl-F when I first opened the page, but he's not present in any extant version; I guess it was a hallucination. I really need to sleep; these slip-ups are worrying.)
[Attached image]
Is there anyone here who 1) thinks that AI x-risk is a threat that should be taken seriously, and 2) also thinks that this letter is a bad idea? If so, can you explain your reasoning? And also explain what restrictions on AI development you would support?
For a group of people who are allegedly very concerned with the possibility that AI will soon wipe out humanity, Rationalists are suspiciously resistant to any proposals for actually slowing and regulating AI development. A lot of the comments on this letter on LW and /r/ssc are very critical. If your stance is "I wish we could slow AI development and I support the letter in spirit, but I think it's unlikely to work", then that's one thing. But the critical comments seem to suggest that the comment authors either don't support any AI regulation at all, or else they're engaged in motivated reasoning to try to convince themselves that it's not even worth trying (e.g. "this letter will have a net negative impact due to its effect on capabilities researchers who don't like it" - the good ol' "if you fight your enemies, they win" tactic for concern trolling).
It lends support to my intuition that most AI x-riskers don't actually take the idea of x-risk very seriously, and on a gut level they think the benefits of AI are so likely to outweigh the downsides that there's no issue with pushing full steam ahead on capabilities research.
(Obviously if you're a full on utopian optimist and you consciously affirm that x-risk is not a serious threat, then there is no contradiction in your position and none of this applies to you.)
1. Yeah, but not in the way Rats tend to think about it. Their way of thinking is very alien to me.
2. Primarily I am a Butlerian Jihadist and would like to get rid of the whole thing. Secondarily, I lean towards DaseIndustrialism (your question might stem from being unfamiliar with his worldview): the regulations I would support would be a ban on closed-source / closed-data AI, plus subsidies for open-source initiatives.
I've never understood this view - do you really think the Internet and all the technology we've created post-1990 are net-negative?
If so, why? Or is it just based on a concern for AGI?
I'm OK with the Internet, but things started going downhill with the introduction of convenient "one-stop shopping" platforms operating through opaque recommendation algorithms.
Also, seeing the impact of the internet on generations that have never known a world without it, I wouldn't be at all surprised if it turned out to be a net negative.
I don't know if I even believe in AGI, my issue is with current AI technology. It has the potential to dumb us down, and give the establishment the tools to shape our ideas and discourse like nothing before it.
Hopefully we're just in a local minimum, but I agree with you in terms of monopolies in tech. I'm optimistic that LLM progress and medium-term space travel will alleviate some of the monopolistic tendencies in our current economic system.
I don't think "we have planetary monopolies on media, but we colonized several planets" helps. Everybody torrenting the ever-loving fuck out of media (or using IRC instead of WhatsApp) looks like something that might plausibly alleviate the technology's negative social consequences.
With LLMs I'm more pessimistic. The issues might be fundamental to the technology itself; I don't see how it's a good idea to outsource your reasoning skills to a machine. Imagine having an exoskeleton so small you could wear it as a second skin, which gives you superhuman strength. Sounds great, but your actual body is going to atrophy.
But the only thing worse than having a machine that atrophies your brain is having a machine that atrophies your brain and is under the control of hostile actors, so maximum proliferation is the next best thing after a total ban.