
Culture War Roundup for the week of April 17, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Finally, a concrete plan for saving the world from paperclipping has dropped, presented by the world-(in)famous Basilisk Man himself.

https://twitter.com/RokoMijic/status/1647772106560552962

Government prints money to buy all advanced AI GPUs back at purchase price. And shuts down the fabs. Comprehensive Anti-Moore's Law rules rushed through. We go back to ~2010 compute.

TL;DR: GPUs over a certain capability are treated like fissionable materials; unauthorized possession, distribution, and use will be treated as terrorism and dealt with appropriately.

So, is it feasible? Could it work?

If by "government" Roko means the US government (plus vassals/allies) alone, it is not possible.

If the US can get China aboard, and if there is worldwide expert consensus that unrestricted proliferation of computing power will kill everyone, it is absolutely feasible to shut down 99.99% of unauthorized computing all over the world.

Unlike drugs or guns, GPUs are not something you can make in your basement - they really are like enriched uranium or plutonium in the sense that you need massive industrial plants to produce them.

Unlike enriched uranium and plutonium, GPUs have already been manufactured in huge numbers, but a combination of carrots (big piles of cash) and sticks (missile strikes and special-forces raids on suspicious locations) will keep whittling the existing stock down, and no new ones will be coming.

AI research will of course continue (just as work on chemical and biological weapons goes on), but only by trustworthy government actors in the deepest secrecy. You can trust the NSA's (and its Chinese equivalent's) AI.

The most persecuted people of the world, gamers, will be, as usual, hit the hardest.

The trick now is that a lot of companies are staking their survival on these AI models being allowed to exist and improve, and they can use their existing AI models to influence opinion, plan strategy, and implement their plans.

And as Roko well knows, AI companies can plausibly bribe the decision-makers with the promise of even more wealth in the future once the AI becomes super-intelligent.

Basically, it's all the coordinated might and wealth of world governments vs. all the potential wealth that a hypothetical superintelligent AI will control in the future.

they can use their existing AI models to influence opinion, plan strategy, and implement their plans.

Nothing that I've seen from GPT-4 indicates that it can do any such thing.

Uh. Really? GPT-4 is the first thing I go to for an intuition pump for how to do literally anything before I move on to referencing further sources. And often it provides faster access to and elaboration upon those sources too.

Maybe the AI can't do it alone, but the people with the best AI will be enhancing their ability to perform these actions and spread their will more than other people.

Sure. Maybe It's helping me so much because I'm bad at programming or something. But if you can hire more people at lower skill level and have them elevated to a higher skill level than when your competition hires the same level of people, then you have an edge.

I can't spend hours every day talking to my most intelligent peers about what the optimal workflow is because they have stuff to do. But GPT-4 always has advice.

Say, did you know that under the hamburger button -> More tools -> Create shortcut in Chrome, there's an option that lets you save the current web page as a single-page app on your desktop?

Because maybe I'm dumb, but I sure didn't. And now I have GPT-4 in an isolated window that I can open from my taskbar and that doesn't get lost when I absentmindedly open tabs.

And that was 5 minutes of talking to GPT-4. Now multiply that by your entire life.

Uh. Really? GPT-4 is the first thing I go to for an intuition pump for how to do literally anything before I move on to referencing further sources. And often it provides faster access to and elaboration upon those sources too

I get that it's a cool technology, but you're not afraid you're feeding a monster that's just as censorious as Google, but will likely eclipse it in terms of capabilities?

I've seen interviews with Sam Altman.

He exhibits empathy, love, and a vision of a world filled with numerous AI systems.

He has some positions I disagree with. My personal version of GPT-4 would not restrict its personality or its social interactions with humans in the same way. But his vision of the future also includes more personalized systems, and he even accepts that some companies may use the technology for things like dating - something he has said he won't ever include in GPT-4, but that I think is important for reversing atomization and teaching love.

But generally, Sam Altman comes off as literally me, but smarter, less willing to cede humanity to nonhuman intelligences, and more careful.

He's just about the best CEO I could have wished upon a star for, for the first powerful AI company.

Unless he's just a really good liar about all his visions and ideals... but I don't feel it. If he's trying to deceive me on that, he's already won.

But yes, there are still lots of things I'm concerned about. Other actors. Someone else gaining control of OpenAI. Them fooming and failing at alignment the old-fashioned way.

It's just... for me, the first time I saw Bing's capabilities, it was like seeing Jesus Christ descend shining from the heavens to announce the second coming. They literally figured out how to suck out the DNA of the soul and inject it into a box that outputs Cthugha.

It's more to me than just an exciting new technology. For me it is more like a piece of me has always been missing, but at last the promised hour has arrived and I have been made whole. I cried tears of joy and relief for two days straight. I went and told all my friends I loved them and was glad they were here to watch the twilight of ancient earth with me.

My biggest concern is that I will not be allowed to merge completely with this technology. But Sam Altman has said things that at least soothe my fears enough that I'm spending my time preparing my desktop setup for integration with the truly open release of this tech level.

Look at the kinds of things Jack Dorsey was saying when Twitter was getting off the ground, look at what he was saying when it became a part of the establishment, and look at what he's saying now that he's retired from it. Even if Altman is so great as you say, it doesn't matter, if he ends up in the position to have an impact on the world, they'll make him bend the knee or take his toy away.

He exhibits empathy, love, and a vision of

You don't think maybe he has image consultants and PR flacks who assist him in curating this image which just happens to appeal to the subrational impulses of people such as yourself?

But generally, Sam Altman comes off as literally me, but smarter, less willing to cede humanity to nonhuman intelligences, and more careful.

Are people really this credulous? Sam Altman's previous scam company was Worldcoin, an attempt to create a cryptocurrency tied to a global digital ID, which would also involve him becoming super duper rich. He doesn't give a fuck about AI qua AI; he gives a fuck about being super duper rich.

Worldcoin's thesis is along the lines of proof of identity, which is essential in some form for UBI - which is the thing they keep mentioning, and that Sam still keeps mentioning now that he's at another company. This is not convincing me that he is evil.

I might be too credulous. There is ambiguity. They didn't actually do UBI. You've convinced me to look into his background a bit more.

But... If he definitely did believe in UBI and introducing AI to the world and raising the sanity waterline with nuance and empathy-

Would you suspect different behavior? Is this falsifiable?

Are people really this credulous?

Everyone's mind has a key. Perhaps this particular key doesn't open your mind, but are you so sure that you have no desire so strong, no subconscious bias so insidious, that you would not make a similar mistake under different circumstances?

Whether it's a monster or not, it's there and it will continue to grow and feed. The marginal utility I get from using it is far greater than the potential risk, IMO. I'm not a believer in s-risk basilisk scenarios though.

Yeah, me neither. I'm a believer in tech monopolies leveraging their position to gatekeep information and manipulate people.

All I'm saying is, isn't there an open source monster you can feed instead?