This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Culture War appropriate? Okay, I'm hearing everything about how AI is coming for our jerbs/gonna make us so productive, eventually only six humans will be working in the entire economy and the rest of us will be livin' large on that sweet, sweet UBI from all the yuuuuge economic gains.
But what does that mean in actuality if I'm not a software engineer type?
For example, a Substack comedian (literally, that was his day job) has a post up about AI and how this is all hysteria, nobody is going to lose their jobs, it's merely the usual sort of dip in the economy and the sectors most affected are:
Okay. I fall into the "secretaries and administrative assistants" bucket and I know Sweet Fanny Adams about AI. My exposure to it in the workplace is with the free Copilot Microsoft has bundled in with Microsoft 365 and, apart from annoying me with "Do you want me to write that email?" (no thanks, I think I can figure out how to say "I got that invoice, thanks" all by my little ownsome), I see no use for it.
But! AI is going to be the wave of the workplace future! So, for all you who know and use the thing and are up on the different models, here's an example of a task I routinely need to do in my job. Can AI (Copilot or whatever) do this, or most of it, for me?
A request from our auditors:
Where this information is located:
How do I use AI to take all this drudgery off my hands? Can I ask/tell it "here's the details of how to log in, now go ahead lil' Copilot and pull out all that info and make a nice, tidy spreadsheet out of it all"? Or do I have to hand-hold it every step of the way, in which case I am just as well off to do it all myself?
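Not the poster, but for what it's worth: the batch-extraction job described above is often easier to hand to a short script than to a chat assistant, because a script is deterministic and never "hallucinates" a figure. A minimal sketch in Python, assuming the records can first be exported from the system as CSV (the filenames, column names, and sample data here are all hypothetical, just to show the shape of the task):

```python
import csv
import io

# Hypothetical CSV export from the records system (an assumption:
# most such systems can dump their data in roughly this form).
exported = """employee_id,name,start_date,salary
101,A. Smith,2021-03-01,42000
102,B. Jones,2019-07-15,51000
"""

# Parse the export and pull out just the fields the auditors asked for.
rows = list(csv.DictReader(io.StringIO(exported)))
summary = [(r["employee_id"], r["name"], r["salary"]) for r in rows]

# Write the consolidated spreadsheet (a CSV opens directly in Excel).
out = io.StringIO()
writer = csv.writer(out)
writer.writerow(["Employee ID", "Name", "Salary"])
writer.writerows(summary)
print(out.getvalue())
```

The honest answer to "do I have to hand-hold it?" is: today, mostly yes. A chatbot can help you write a script like this once, but you'd still verify its output against the source records, which is exactly the hand-holding you were hoping to avoid.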
I had my first "AI at work" experience the other day when I sat through a luncheon meeting presented by a rep for one of the big legal research companies. It was billed as a continuing education event but was really just a sales pitch for their AI products. The guy was able to cite two uses for AI in the legal field:
That's all well and dandy, but I don't do either of those things very often. This wasn't presented as "the technology is quickly changing and you'll be able to do more in the future" as much as "this is all you can do within the bounds of ethics and without exposing yourself to a malpractice suit". The idea that law firms will consist of a few partners handling a suspiciously large number of cases by prompting AI to generate outputs is pure fantasy. The people who think that AI will take over everything do so on the assumption that all work boils down to a set of deliverables that simply need to be generated, when that's not the case. If I'm looking to generate deliverables, I can already have a paralegal do all the drafting and research and just put my name on it, because there's nothing that says you need a law license to do legal research or draft documents.
What the client is paying for is someone to take responsibility for the case, and it would be irresponsible of me to "handle" a case about which I knew nothing. Most of my time is spent reviewing and analyzing facts. Sure, it may theoretically be possible to build an AI that can determine what's relevant and formulate a strategy better than I can, but the AI is not going to be responsible for its output. I'm never going to trust AI with tasks I wouldn't trust to support staff, no matter how much I trust my support staff (and they're great, btw), because the client doesn't want to hear about how it's the paralegal's fault. If I allow AI to do all my work for me, and I go into negotiations missing something, that's a pretty big matzo ball hanging out there. It's not that I'm perfect, or even necessarily better than AI theoretically could be, but the client is ultimately trusting me to make the relevant decisions, and I can't make them without a thorough knowledge of the case.

It's the same problem with autonomous vehicles. I said a decade ago that they would never catch on, not because of any technical limitation, but because auto manufacturers aren't going to take responsibility for them. We've already seen this with Tesla being very aggressive in its defense of lawsuits stemming from Autopilot. I don't necessarily disagree with Tesla's stance as things stand now, but if a vehicle is truly autonomous, then an accident isn't caused by negligence on the part of the driver but by products liability on the part of the seller and manufacturer. As long as automakers take the stance that the owners of vehicles are ultimately responsible for them, true AVs will never exist.
The other big issue is data security. You can tell me all day long about how great Claude, ChatGPT, or Gemini are, but in the legal world using any of them is a complete nonstarter. Any lawsuit is going to deal with confidential data, and some suits deal with little but confidential data. At a minimum, we need to use settlement histories to evaluate the potential settlement value of a case. Google literally built its business around data harvesting, and the tech sector as a whole doesn't have a stellar reputation for protecting client data. Regardless of whatever "opt out" provisions are allegedly in place, no law firm in its right mind would take the risk of feeding reams of data into a chatbot if there's any risk whatsoever that the information will show up later in a chatbot response. And no, this isn't the same as companies feeding their proprietary code bases into chatbots; the code's confidential status is subject to the discretion of management. An attorney does not have the discretion to reveal confidential information, especially if that information would be harmful to the client in the wrong hands.
This is before you even get to the fact that the current technology is underwhelming even for legal research. It looks good in demos but as soon as you try to use it for anything it proves its inadequacy. For document summarizing, 5,000 pages sounds generous, but it's rare that I'm concerned about finding information in a single document. I said this in a comment last week, but the utility would be more like "search all the depositions we have on file and pull all the ones where a witness testified about X". Well, we have tens of thousands of depositions on file, most in PDF but some in a special format used for court transcriptions. Conservatively assuming 100 pages per depo and 140 words per page, that comes out to something like half a billion tokens of context required, before we even consider that PDFs take more tokens than plain text, and a lot more if they haven't been OCR'd (which most of these haven't). Even the document functions described by the sales rep weren't that good; the example he gave was that if you were searching medical records for mentions of cancer it could broaden the search to include mentions of specific cancers.
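The back-of-the-envelope arithmetic above can be made explicit. A quick sketch, taking "tens of thousands" as 30,000 depositions and using a rough 1.3 tokens-per-word ratio (both figures are assumptions for illustration; the page and word counts are the conservative ones from the comment):

```python
# Rough context-size estimate for the deposition archive.
n_depositions = 30_000   # "tens of thousands" (assumed midpoint)
pages_per_depo = 100     # conservative per-deposition length
words_per_page = 140     # transcript pages are sparsely filled
tokens_per_word = 1.3    # typical English tokenizer ratio (assumption)

total_words = n_depositions * pages_per_depo * words_per_page
total_tokens = total_words * tokens_per_word
print(f"{total_tokens / 1e9:.2f} billion tokens")  # → prints "0.55 billion tokens"
```

So "half a billion tokens" checks out even before accounting for PDF overhead and un-OCR'd scans, which is orders of magnitude beyond any current model's context window; the workload would have to be chunked and indexed, not simply pasted into a prompt.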