
Culture War Roundup for the week of March 2, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I honestly don't understand the glee with which AI promoters predict that 50% of all "knowledge jobs" will disappear within a year. Hell, the Chief Legal Officer of Anthropic went to Stanford Law School earlier this year and basically told the students that they should all drop out.

People keep accusing them of glee, but I don't really see it. They seem sober and worried about this happening, and they're practically begging policymakers to come up with frameworks for coping with that future. The same people who call these claims gleeful then turn around and say that asking for policy to cope with mass unemployment is hype rather than genuine concern. I don't like defending Altman of all people, but you really do seem to have them in an impossible position. What can someone truthfully say if they believe AGI may be imminent?

The problem I have is that they don't act like they believe AGI is imminent. They say they do because they have to; if they didn't, people would stop giving them money. Take the legal industry: Anthropic released a report earlier this year claiming that 88% of all legal tasks could be automated by AI, though only a small percentage of those tasks were actually being automated by Anthropic's customers. Meanwhile, they're telling students at a top law school that they should learn to splice cable or something because first-year associate jobs will be automated away. Setting aside the confidentiality concerns of Anthropic monitoring law firm AI use, and the fact that first-year associates have been useless for as long as they've existed, Anthropic's own hiring practices do not suggest that 88% of legal work can be automated away by AI.

I can't find reliable totals for how many lawyers Anthropic employs, but they hired 24 last summer, and I'm sure they had some on the payroll prior to that. A gander at their website also shows several open positions, though these all have different titles and multiple offices listed, so it might be more of a constantly-hiring situation. I can't find reliable estimates of their total employee count either; I've seen everything from 2,500 to 4,500. If they currently have 30 lawyers working for them and 3,000 total employees, that's one lawyer for every 100 employees. That is, to put it mildly, an insane ratio. For comparison, Wal-Mart has 155 in-house attorneys and 2.1 million total employees. FedEx has 60 in-house attorneys for 370,000 US employees. Tech companies have higher ratios, but not that high; Apple and Google are in the 1/200–300 range. These numbers are estimates, of course, and I'm not trying to argue that Anthropic doesn't need all these lawyers or that they're hiring more than necessary. My point is that AI doesn't seem to have reduced their reliance on in-house attorneys compared to other companies, and this at a company that should be, and supposedly is, having its attorneys make extensive use of its AI tools.
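
For the curious, here's the back-of-envelope arithmetic as a quick Python sketch. Every figure is one of the rough estimates quoted above (the Anthropic numbers especially are guesses), so treat the output as illustrative rather than as verified headcounts:

```python
# Back-of-envelope lawyer-to-employee ratios using the rough figures
# quoted above. All of these numbers are estimates, not verified
# headcounts; the Anthropic figures in particular are guesses.
companies = {
    "Anthropic (est.)": (30, 3_000),
    "Wal-Mart": (155, 2_100_000),
    "FedEx (US)": (60, 370_000),
}

for name, (lawyers, employees) in companies.items():
    # Employees per in-house attorney: a higher number means leaner
    # legal staffing relative to company size.
    print(f"{name}: 1 lawyer per {employees // lawyers:,} employees")

# Anthropic (est.): 1 lawyer per 100 employees
# Wal-Mart: 1 lawyer per 13,548 employees
# FedEx (US): 1 lawyer per 6,166 employees
```

On these estimates, Anthropic is running roughly 135 times as lawyer-dense as Wal-Mart and about 60 times as lawyer-dense as FedEx.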

The other thing is that when you look at these job openings, they all have extensive experience requirements. The lowest I saw was 3 years' experience, and a few required 10 to 12. This is common for in-house positions. There were also a bunch of oddly specific experience requirements, which are often more in the "nice to have" category than anything else. The one requirement common to all positions, and obviously non-negotiable, is that the candidate hold an active license in at least one state. Now, I am licensed in three states and meet absolutely none of the other requirements, though I have been working for 10 to 12 years in wholly unrelated fields. Something tells me that if I applied for one of these jobs and somehow got an interview, telling the hiring team that I had mad AI skillz that would let me complete 88% of my work and get up to speed on the remaining 12% quickly would not impress them. Then again, being a true believer was one of the requirements, so who knows.

Can you lay out exactly what you'd expect them to be doing if they thought AGI was imminent? I don't really think they'd bother pinching the salaries of 30 employees if they believed that. I also don't think in-house legal staffing scales the way you're implying. 30 lawyers gives you what, three teams? You're doing a lot of lobbying because you're a major player in new tech, so one of those teams is your lobbying arm; one is working on corporate mergers and acquisitions (I'm sure they're trying to buy some kind of image model team); and one is probably handling liability questions and keep-the-lights-on legal work. It's just not the kind of thing that scales linearly with headcount.

I'm not saying they have too many lawyers. I'm saying that if their products were as good as they claim, they'd be able to make do with fewer. They claim 88% of legal tasks can be automated, and legal employees are among the most expensive. What kind of advertising is that? "You can use our software to automate your legal work and save! Except we have more lawyers on the payroll than the industry average, and when litigating we hire white-shoe firms whose lawyers are the type who have their secretaries print things out for them." If the technology isn't saving Anthropic any money, why should we believe it will save anyone else money?

You can cite all the reasons why you think Anthropic needs a bigger legal department, and maybe they do, but keep in mind that other companies have their own unique issues that Anthropic doesn't have to deal with. For instance, Anthropic doesn't get sued all that often. I represent a subsidiary of a global machinery company based in Japan that got sued a dozen times last month. Over one thing. In one jurisdiction. They're getting sued somewhere, for something, multiple times per day. The US arm of the parent company, whom you've certainly heard of, has five people in its in-house legal department. To be fair to Anthropic, once a company starts getting sued constantly it usually hires national coordinating counsel to manage its litigation, but it still has to prepare assignments to local counsel, accept service, and do all the other boring things that come with the territory, as well as monitor the litigation and grant settlement authority.

Anyway, of the six openings they're advertising, two deal with vendor contracts, one with datacenter construction, one with customer contracts, one with international compliance, and one with "frontier" issues, i.e. problems that don't exist yet and don't have clear answers. M&A and lobbying are the kinds of things that get contracted out; the in-house team doesn't do much hands-on work on them. It's more that outside counsel occasionally meets with, and provides reports to, a senior member of the legal team, with maybe a junior member supervising, but it's not something anyone is doing full-time.

If they currently have 30 lawyers working for them and 3,000 total employees, that's one lawyer for every 100 employees. That is, to put it mildly, an insane ratio.

I think a lot of this comes down to the fact that nobody really has any idea where risk is going to be priced into these business models.

All the AI companies are trying to push it onto the end user to the maximum extent possible - they'd like to keep humans around to function as accountability sinks and not much else. That works great if you accept that the "agent" isn't intelligent and has no agency.

The thing is, if you're claiming that your models are so self-aware that they deserve their own retirement plan, sooner or later somebody's going to believe it and claim that either the AI or the corporation has some form of liability for it. That's some incredibly novel legal ground, and I wouldn't doubt that a fairly large number of those lawyers are wargaming defenses to that right now.

I understand what you're saying, but I've actually looked at the job openings, and they're nothing like that. Of the six openings, exactly one, Frontier Counsel, deals with unusual, cutting-edge issues. The rest are just boring stuff like contracts and datacenter construction. And this position appears to be new; the Deputy Counsel has an announcement of the opening on her LinkedIn from three weeks ago, and it may or may not be filled yet, so it's unclear whether anyone is even dedicated to this full-time at present.

Interesting. How complex are these contracts that they need that many lawyers to handle them?

I have no idea. The odd thing is that one of the tasks they specifically advertise their AI for is contract evaluation. I'm not a contract lawyer, so I'm in no position to comment, though I wouldn't be surprised if the service they're offering does something lawyers don't actually have to do. One of the things I chuckle about is the claim that AI can draft documents. I'm sure it can, but that's kind of irrelevant. I draft a lot of motions, but I'm not reinventing the wheel every time. Usually I have my secretary find a similar motion, change the case caption, and spend a half hour to an hour editing it to fit the facts of the current case. I don't see how entering those facts into an AI prompt instead would save any time, and I can easily see how it could take more time, since I'd now have to review the entire document in greater detail to understand what I was filing, rather than, say, assume that my secretary hadn't touched the part where I explain the summary judgment standard.

Stop being vague and start thinking about specifics. If there’s going to be UBI, how is it going to be paid for, how is it going to be distributed, how do the economics of the whole thing work?

AGI euphoria promoters have been much vaguer about the post-revolution economy than even Marx was in the mid-19th century. "Yeah man, everyone will get their $2k a month in welfare bux, you will live in a nice pod and crochet all day or something, this will all happen with minimal social upheaval and the economics will work themselves out."

The tech people aren't actually the government and can't decide how these questions get answered. You're asking the wrong people for solutions. All they can do is warn about what's coming and make suggestions, which you guys consistently characterize as glee and hype-mongering. What guarantees can Dario make about the structure of redistribution that Trump or his successor will implement? Do you not see that this is an impossible ask?

He can think about the consequences of his technological innovation on society. This is something we ask of many creators; it is fair to ask Mark Zuckerberg whether he thinks social media is harmful, or what should be done about its negative impact on children or whoever (and indeed this is something Meta at least pretends to care about).

Yes, he can think about them, and in fact he can be seen in many interviews and on several podcasts going on about them. But he's not the government; it isn't his role to propose specific policy.