Culture War Roundup for the week of October 3, 2022

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which in turn becomes a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


There is a sense in which the job of coding has already been automated away several times. For instance, high-level languages enable a single programmer to accomplish work that would be beyond the grasp of even a dozen assembly-language programmers. (This did, in fact, trash the job market for assembly-language programmers.)
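To make the abstraction gap concrete, here's a minimal sketch in TypeScript (the data and names are invented for illustration):

    // A one-line reduction in a high-level language. To express the same
    // thing, an assembly programmer would hand-manage registers, a loop
    // counter, memory addressing, and calling conventions.
    const sales: number[] = [120, 340, 560, 89];
    const total: number = sales.reduce((sum, x) => sum + x, 0);
    console.log(total); // 1109

The point isn't the arithmetic; it's that the language and its runtime, not the programmer, own the bookkeeping.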

The reason this hasn't resulted in an actual decline in programmer jobs over time is that each time a major tool is invented that makes programming easier (or eliminates the necessity for it in particular domains), people immediately set their sights on more difficult tasks that were considered impractical or impossible in the previous paradigm.

I don't really see the mechanism by which AI-assisted programming is different in this respect. Sure, it means a subset of programming problems will no longer be done by humans. That just means humans will be freed to work on programming and engineering problems that AI can't do, or at least can't do yet; and they'll have the assistance of the AI programmers that automated away their previous jobs.

And if there are no more engineering or programming problems like that, then you now have Fully Automated Luxury Space Communism.

Roughly speaking, I see your point and agree that it's possible we're just climbing a step further up on an infinite ladder of "things to do with computers".

But I disagree that it's the most likely outcome, because:

  1. I think the continued expansion of the domain space for individual programmers can be partially attributed to Moore's Law. More Is Different; a JavaScript equivalent could easily have been developed in the 80s but simply wasn't, because there wasn't enough computational slack at the time for a sandboxed, garbage-collected, asynchronous scripting language to run complex enterprise graphical applications. Without the regular growth in computational power, I expect innovations to slow. (A sketch of the runtime overhead I have in mind follows this list.)

  2. Cognitive limits. Say a full-stack developer gets to finish their work in 10% of the time. Okay, now what? Are they going to spin up a completely different project? Make a fuzzer, a GAN, a SAT solver, all for fun? The future ability of AI tools to spin up entire codebases on demand does not help with the human learning process of figuring out what actually needs to be done. And if someone makes a language model to fix that problem, then domain knowledge becomes irrelevant and everyone (and thus no one) becomes a programmer.

  3. I think, regardless of AI, that the industry is oversaturated and due for mass layoffs. There are currently weak trends pointing in this direction, but I wouldn't blame anyone for continuing to bet on its growth.
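On (1), a minimal TypeScript sketch of the slack I mean (fetchRecord is a made-up stand-in for any async I/O call):

    // Even this trivial async workflow leans on an event loop, heap-allocated
    // promise objects, and a garbage collector: cheap today, prohibitive on
    // 1980s hardware.
    function fetchRecord(id: number): Promise<string> {
      return new Promise<string>((resolve) =>
        setTimeout(() => resolve(`record-${id}`), 10)
      );
    }

    async function main(): Promise<void> {
      // Each await suspends this function and yields to the event loop; the
      // closures and promises allocated here are all reclaimed by the GC.
      const records = await Promise.all([1, 2, 3].map(fetchRecord));
      console.log(records); // ["record-1", "record-2", "record-3"]
    }

    main();

None of this is exotic now, but every line allocates, and the runtime bookkeeping behind it is exactly what an 80s machine had no cycles or memory to spare for.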

For (1), what you're saying is certainly true; the better abstractions and better tooling have been accompanied by growth in hardware fundamentals that cannot reasonably be expected to continue.

(2) is where I'm a lot more skeptical. A sufficient-- though certainly not necessary-- condition for a valuable software project is identifying a thing that requires human labor that a computer could, potentially, be doing instead.

The reason I called out robotics specifically is that, yeah, if you think about "software" as just meaning "stuff that runs on a desktop computer", well, there are lots of spheres of human activity that occur away from a computer. But the field of robotics represents the set of things that computers can be made to do in the real world.

That being so, if non-robotics software becomes trivial to write, I expect we are in one of four possible worlds:

World one: General-purpose robotics-- for example, building robots that plant and harvest crops-- is possible for (AI-assisted) human programmers to do, but it's intrinsically really hard even with AI support, so human programmers/engineers still have to be employed to do it. This seems like a plausible world that we could exist in, and seems basically similar to our current world except that the programmer-gold-rush is in robotics instead of web apps.

World two: General-purpose robotics is really easy for non-programmers if you just make an AI do the robot programming. That means "programming" stops being especially lucrative as a profession, since programming has been automated away. It also means that every other job has been (or will very soon be) automated away. This is Fully Automated Luxury Space Communism world, and also seems broadly plausible.

World three: General-purpose robotics is impossible at human or AI levels of cognition, but non-robotics AI-assisted programming is otherwise trivial. I acknowledge this is a world where mass layoffs of programmers would occur and that this would be a problem for us. I also do not think this is a very likely scenario; general-purpose robotics is very hard but I have no specific reason to believe it's impossible, especially if AI software development has advanced to the point where almost all other programming is trivial.

World four: World two, except somebody screwed up the programming on one of their robot-programming AIs such that it murders everyone instead of performing useful labor. This strikes me as another plausible outcome.

Are there possibilities I'm missing that seem to you reasonably likely?

For your point (3), I have no particular expectations or insight one way or another.

Hi, I just want to leave a stub response: you seem right, and I failed to type a decent response after reading this two days ago.