Culture War Roundup for the week of April 17, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

As @2rafa and others have mentioned, ML will be a step change in how human society creates value and interacts with the world more generally. Once we've achieved AGI, roughly defined as having an AI that can act at the level of an ordinary human, our ability to solve problems will drastically increase.

Intelligence is the generic solver for essentially any problem. Sam Altman himself has said that by 2030 he envisions a world where every product and service will either have integrated intelligence or be angling towards it. This means our phones, laptops, and PCs will all obviously be intelligent. What most people don't realize, however, is that this technology will also affect our coffeemakers, stoves, thermostats, glasses, and practically every other technology you can think of. I'm sure adaptive clothing with camouflage-like capabilities will exist soon. People will be able to get realtime instructions into headphones telling them exactly how to complete each task.

Even these predictions only scratch the surface. If AGI delivers on its true promise, it will also let us break through hard problems in mathematics, create brand-new drugs, and discover extremely dense and powerful new materials. It will help us navigate endless layers of bureaucracy, effortlessly pruning the thousands of regulations that hold up large projects, ruthlessly pinpointing where cost is added so we can tackle the cost-disease problem, and generally unsticking our public works. We could be building scintillating skyscrapers of filament-thin materials, with bridges across the sky that glisten in the air, all within a decade. The future is truly difficult to even envisage, let alone predict.


In terms of comparisons to other revolutions, @2rafa says below:

There is a (relatively persuasive) case to be made that the invention of agriculture led to a decline in the quality of life for the vast majority of human beings that lasted until the late 19th or early 20th century. It took 11,900 years for the neolithic revolution to pay quality of life dividends, in other words. We can only hope that the period of relative decline in quality of life is shorter this time round, or perhaps avoidable altogether.

I agree that the agricultural revolution led to problems, a la Scott Alexander's review of Against the Grain. That being said, I find the comparison of the AI revolution to agriculture facile. Ultimately, the agricultural revolution proved bad for us because it shifted our lifestyle from nomadic to sedentary, which created inherent problems of physical fitness, freedom, and social control, and forced cultural institutions to shift rapidly.

With the AI revolution, we have no idea how far it will go. The possibility space is far beyond what could have existed for any previous revolution. As doomers say, we could all die. We could all transcend our fleshly forms and become gods in ten years. China may create an ASI and lock us all into a totalitarian doom state forever.

The stakes here are far higher than the agricultural revolution, and I highly doubt our situation will parallel that trajectory.

At the end of the day if we can survive the AI revolution without any horrible outcomes of the x-risk or s-risk variety, I think it would be ridiculous to posit any sort of negative future. With intelligence at our fingertips, we will be able to finally achieve our potential as a species.

People will be able to get realtime instructions into headphones telling them exactly how to complete each task.

Where have I heard this one before?

Seriously, this seems too specific to be a coincidence. Was it a deliberate reference?

Thought your link led to this

Nope. I actually don’t like referencing that story, because I think it’s pretty short-sighted, although it does have some interesting ideas.

This is a very common idea in any hard sci-fi that features AI. Manna is by no means original, just popular in the rat sphere.