
Culture War Roundup for the week of November 28, 2022

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


If significant parts of your work can be represented as a cognitively taxing transformation of a symbol sequence into some other symbol sequence – you should start thinking how it feels to be on the receiving end of those arguments.

I've been thinking on this matter for a little under two years now, ever since GPT-3 came out.

Consider the field of law, where easily 90% of the industry is "cognitively taxing transformation of a symbol sequence into some other symbol sequence", and there's already an existing, HUGE databank on which to train possible models to handle legal work, specifically.

My honest bet is that any student currently in their first year of Law School will be unable to compete with AI legal services by the time they graduate. Certainly not on cost. The AI didn't incur 5-6 figure loans for its legal training.

Put another way, the AI will be as competent/capable as a first-year associate at a law firm inside 3 years.

If you are considering law school as a career choice, stop. If you're currently in law school (and don't have a job locked down), drop out, or angle for a job outside the field.

Any field where the writing is less cognitively demanding than law will also be on the chopping block.

How can I get a date with a hot girl?

Interesting choice on that question, given another rather dystopic prediction I'm currently making:

There will be AI bots specifically tailored to chat with women on dating apps and convince them to go on a date. And they will be really good at it. Hell, since image recognition is effectively a solved problem for AI, it'll probably be able to scan all the photos of a given potential match, select only those with the features the guy finds attractive, and then chat them up.

I don't know how the average woman would react to learning that the twenty attractive dudes she thought were giving her attention were actually letting an AI chat her up while they lifted or played video games, getting pinged only when she agreed to a time and place for a date or sent a nude.

This is based on the news that AI can now beat humans at Diplomacy using straightforward negotiation tactics.

Given the current state of the dating market, this application feels inevitable.

Consider the field of law

Consider how much less efficient the practice of corporate law was before the advent of the word processor. As a result, merger agreements used to be just a few pages long. With a modern word processor and a database of electronic precedents, a law partner could bang one of these out in no time. The legal profession's response to this efficiency windfall was not to slash law firm staff, but to increase the length and complexity of merger agreements. Now they're like 100 pages long, plus hundreds of pages of other crap, and they are expected to be drafted with full knowledge of a much vaster corpus of law and jurisprudence.

So I suspect that further efficiency gains will simply raise the complexity ceiling rather than reducing the size of the industry. We could see thousand-page merger agreements of increasing variety, vastly more intricate negotiated horse-trading over terms previously accepted as boilerplate, and increasing rigor in sourcing each provision to ever more obscure and fact-specific legal doctrines.

I think the law students' jobs are safe, or at least as safe as they ever were.

Interesting, so your position is that large law firms would continue hiring law grads into junior positions at $235,000.00/year even if they could achieve significant cost savings (possibly losing some efficacy) by utilizing an AI?

My position is that they wouldn't achieve significant cost savings, because as they become more efficient in producing high quality legal documents, the quality expectations of the industry would increase by approximately the same percentage.

The legal profession is predominantly a red queen's race. It produces some objective benefit in promoting clarity and discipline within and between organizations, but everything beyond that is zero-sum -- trying to get better terms or a better dispute settlement at a counterparty's expense, trying to cover off an edge case where the default rule might favor your counterparty marginally better than your proposed customized solution -- and my suspicion is that the latter category is the bulk of the profession. Through that lens, the amount that a corporation spends on legal fees is more akin to the amount that a country spends on its military than the amount it spends on agriculture. When technology makes militaries more efficient, the result isn't less military spending on either side of a border, it's scarier militaries staring each other down across that border.

I don't see how your position supports the conclusion that "law students' jobs are safe," only that "law firms will continue to be profitable and entrenched in the corporate ecosystem."

Which I agree with. I just expect that law firm partners will exploit the ability to produce more billable time whilst paying less to their associates.

And this will likely trigger even harsher competition amongst small firms/solo practitioners since an AI that can produce most basic legal documents after a brief discussion with a potential client can be used to corporatize this aspect of the practice.

How does a firm justify billing $300/hour to a non corporate client when the AI-based firm up the street can produce similar quality work for <$100 total?

People are still force multipliers. What the GP is saying is that firms employing lots of drones and lots of AI will produce better results than firms with only AI or only drones. So eventually big law will employ lots of drones and AI -- in an arms race, no advantage is permanent.

People are still force multipliers.

The problem is that the IQ waterline above which a given person is reliably better than an AI at a given task will probably rise by a couple of points a year.

There will certainly still be a place for the humans thus displaced, it just won't be in any of the fields where skill is the determining factor and the AIs are higher skilled.

I mean, people still like to watch Magnus Carlsen play chess, but he could be beaten by a chess program running on a four-year-old smartphone.

There will certainly still be a place for the humans thus displaced, it just won't be in any of the fields where skill is the determining factor and the AIs are higher skilled.

As an amusing thought experiment, consider trying to explain modern economics to someone from a society just coming upon the division of labor:

"You mean to tell me that only 1% of your population has to work to feed everyone? That sounds great! Imagine how much everyone must enjoy all of that free time!"

Needless to say, that isn't how it actually went, and I expect AI to be similar: we'll find something else in which to spend our time and raise our expected standards of living to match.

The two questions we could break it down to are:

Is this the equivalent of the invention of the car, in terms of its impact on the horse-drawn carriage?

and

Are we the HORSE in this scenario?

Once automobiles became strictly better than horses for the majority of tasks horses were used for, what happened to horse employment?
