Culture War Roundup for the week of April 7, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Tyler Cowen has a Conversation with Jennifer Pahlka on Reforming Government

I will pull one little segment.

COWEN: If someone says, when it comes to regulatory reform, accountability is not the solution, it’s the problem, do you agree?

PAHLKA: Accountability is not the solution, it’s the problem?

COWEN: You put in accountability, everything has to be measured, everything has to have a process. It’s judicial review. Should we have less accountability in government?

PAHLKA: In a certain sense, I would agree with that. I don’t think in an absolute sense. I think the way that we structure accountability is very flawed. I think we are holding public servants essentially accountable to metrics that are not proxies for real outcomes that people care about. When you have a very high focus on accountability that is really about fidelity to procedure and process instead of to the actual outcome, that’s not accountability.

COWEN: Outcomes are heterogeneous, they’re tricky, they’re long term. When you ask people to measure, you end up with a lot more emphasis on process than you want. So, maybe accountability is the problem. To say accountability for outcomes — that’s just going to morph into accountability for process. That’s what I observe, even in private companies. Big, successful, profitable private companies that we’re all familiar with — they have the same problem, as I’m sure you know.

PAHLKA: That they’re held accountable to the —

COWEN: There’s far too much process, bureaucracy, delays. They’re slow. Look at construction productivity in the United States. It’s terrible. It’s declined.

PAHLKA: Yes, I would agree with that.

COWEN: And that’s the private sector.

PAHLKA: Yes. I think one of the issues though is that there is more accountability to process in government than in the private sector, I believe.

COWEN: More in government.

PAHLKA: More in government.

COWEN: Yes, sure.

PAHLKA: Because in the private sector, if you don’t get the outcomes, you are unlikely to succeed financially.

COWEN: There’s a profit — clear goal. In government, it’s not the same kind of outcome. It collapses more into process.

PAHLKA: Yes, it collapses more into process, absolutely. I think also you have — what is it — Goodhart’s Law that says once a measure becomes a target, it ceases to be useful, and you see that everywhere in government. I think that’s part of what I mean about the new public management and the reinventing government in the ’90s was highly reliant on “Let’s set a goal and follow that goal.”

There can be real value in that. I’m not discrediting it entirely, but you do have that erosion of the value of those targets as people try to meet the target without actually meeting the outcome that was intended. I think that a more digital transformation approach that is instead able to change over time more quickly and say, “Wait, this target is no longer helping us get where we wanted to go. We’ve got to iterate on that. Let’s change it.” That can really, I think, get us out of that industrial era of thinking.
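Goodhart's Law, as invoked above, is easy to see in a toy simulation: an agent graded on a process-compliance proxy pours all its effort into the proxy, and the true outcome craters. This is purely illustrative; the two-factor model and its weights are made up for the sketch, not anything from the conversation:

```python
import random

random.seed(0)

def true_outcome(quality, paperwork):
    # What we actually care about: mostly real quality, slightly helped by process.
    return 0.9 * quality + 0.1 * paperwork

def proxy_metric(quality, paperwork):
    # What gets measured: process compliance, only weakly tied to quality.
    return 0.2 * quality + 0.8 * paperwork

def optimize(metric, budget=10.0, steps=1000):
    # Random-search hill climb: split a fixed effort budget between
    # quality and paperwork to maximize the given metric.
    best = (budget / 2, budget / 2)
    for _ in range(steps):
        q = random.uniform(0, budget)
        candidate = (q, budget - q)
        if metric(*candidate) > metric(*best):
            best = candidate
    return best

q1, p1 = optimize(true_outcome)   # agents rewarded on the real outcome
q2, p2 = optimize(proxy_metric)   # agents rewarded on the proxy

print(f"outcome-targeted: quality={q1:.1f}, true outcome={true_outcome(q1, p1):.2f}")
print(f"proxy-targeted:   quality={q2:.1f}, true outcome={true_outcome(q2, p2):.2f}")
```

Once the proxy becomes the target, the rational move is to buy paperwork instead of quality, so the measured number goes up while the thing the measure was supposed to track goes down.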

I want to pull on some threads in the vein of my previous comments on military research, development, and procurement. They talked about this some, but were also talking more broadly. I think the problem to be solved is perhaps most clearly cognizable in this domain. Reordering the discussion a bit, we'll start from the outcomes, the things that we're trying to achieve:

COWEN: Outcomes are heterogeneous, they’re tricky, they’re long term.

As I put it:

we have a situation where your military is very very rarely 'tested' (in fact, ideally it is very rare). You very rarely get actual feedback. When you do, you do not have access to the counterfactual of what would have happened if you had invested differently.

Look at the lead time for something like a modern fighter jet. What's the chance that the guy who originally greenlit the program is still around to be 'accountable' if/when it's actually used in a hot conflict, such that its performance can be assessed against the competition? Do you handicap that assessment at all? He made his decision a decade ago, seeing a certain set of problems that they were trying to solve. A decade or two later, your adversaries have also been developing their own systems. Should he be punished in some way for failing to completely predict how the operating environment would change over decades? Suppose he made the decision in Year X, and it came into service in Year X+10. It hypothetically would have performed perfectly well for a decade or two, but you just never had a hot war and never saw it. By the time Year X+25 rolls around and you do get into a hot war, it's hot garbage compared to what else is out there. Is he blameworthy in some way? Should he be held 'accountable' in some way? There's a good chance he's now retired or even dead, so, uh, how are you going to do that?

Obviously, there is a spectrum here, but I would argue that a modern fighter jet is more toward the middle of the spectrum than at the far end. Yes, there are plenty of faster-turnaround things, but there are also lots of long-lead-time things. Just think about the components/subsystems of the fighter jet: by the time a decision is made to greenlight the larger project, most of these have to be relatively mature. The gov't and company involved can probably take some risk on a few of these, but they can't take too many. They want a fair number of subsystems that they are confident can be integrated into the design and refined within the overall project schedule. That means that all of that investment had to happen even earlier.

Back to that guy who makes the decision. Who is that? Probably a general or a political appointee. Possibly a group of gov't stakeholders. How does he decide what to buy? Remember, he's trying to predict the future, and he doesn't actually know what his adversaries are going to do in the meantime. He has no direct outcomes by which to decide. He doesn't yet have some futarchy market to somehow predict the future. All he can really do is educate himself on what's out there, what's possible, what's at various stages of maturity, and where various people think stuff might be going. As I put it in the doubly-linked comment:

There will be a plethora of "experts" who have their own opinions. Some top military folks in the early 1900s will think that airplanes are just toys, while others will tell you that they can change the nature of warfare; how do you know who to believe and where to put your money?

And so, I think Tyler would claim, this fundamentally drives these decisions to be focused on process rather than outcome. The outcome isn't accessible and likely isn't going to be. Instead, people basically just implement a process to ensure that the decisionmaker(s) are talking to the right stakeholders, getting a wide variety of input, not just shoveling contracts to their buddies, etc. Sure, these decisionmakers still have some leeway to put their mark on the whole thing, but what's the plan for adding more 'accountability' to them that isn't just, "Whelp, let's make sure they go through enough process that they don't have the obvious failure modes, and then sort of hope that their personal mark on the process is generally good, because we've built up some trust in the guy(s) over some years"?

Now, think like a company or research org that is considering investing in lower-maturity subsystems. It's a helluva risk to do that with such an incredibly long lead time and, frankly, a pretty low chance of having your product selected. You're going to care a lot about what that process looks like, who the relevant stakeholders/decisionmakers are, and what their proclivities are. If you're pretty confident that the guy(s) in charge mostly don't give a shit about airplanes, you're even more unlikely to invest a bunch of money in developing them or their components. Will some crazy company spend thirty years developing a fully-formed system, getting no contracts anywhere along the way, just hoping that once the generals see it complete and in action (ish, because again, there's no hot war and you can't really demonstrate the meaningfulness of having a thousand airplanes with your one prototype), they'll finally be 'forced' to acknowledge how effective it's going to be, finally unable to brush it off, and finally actually buy it for bazillions of dollars? I guess, maybe, sometimes. But probably not very often. Thus, I think it's pretty unlikely that the gov't can just completely wash its hands of any involvement in the research/development pipeline and say, "Companies will always bring us fully-formed products, and we'll decide to buy the best ones." Pahlka touches on a need for the gov't to "insource" at least some parts of things:

There are rumors that DOGE is actually pro-insourcing more tech talent in government. I know you’re not hearing about that now. It may just be a rumor, and it may not be true, so don’t quote me on this. I certainly think that folks in Musk’s world came in and looked at government, and said, “How do you even operate with so little technical competence in-house? That’s crazy.” And it is crazy. They’re right about that.

We’ll either end up, I think, even further privatizing not just the software, but the whole operations as the software and the operations are increasingly melded — again, this is not new — or we’ll be forced finally to gain the internal competence that we have always needed and start to do this a little bit better.

Again, I think she's talking more broadly, but that bit about software and operations being very melded is quite poignant when thinking about military applications.

Getting back to the problem of not knowing what's going to be effective in the future, the traditional solution is to just fund pretty broadly, through multiple mechanisms. Not sure about airplanes? Have one guy/group who seems to like airplanes go ahead and fund a variety of airplane-related stuff. Have some other guy who doesn't like airplanes fund some other stuff. There's obviously a bunch of risky capital-allocation questions here, and decisions ultimately have to be made. Those are tough decisions, but how do you add 'accountability' to them? I don't know. I think the easy/lazy way is basically just looking at your 'guys' (your airplane guy, your submarine guy, etc.) and asking, "What have you done for me lately?" The obvious concern is that that makes all your guys tilt their portfolios much more heavily toward shorter timelines. But part of the point of the government being 'eternal' is that it should be able to think on longer time horizons, because that may end up being more valuable than just short-time-horizon things that can be more tightly linked to 'outcomes' or 'accountability'.

I started off being a bit taken aback by the idea Tyler proposed that we should almost just abandon accountability. I've generally been somewhat pro-accountability, and I know folks here have talked about it a lot. But upon reflection, with my pre-existing but not previously-connected thoughts on military procurement, it makes a bit more sense to me that there is a real tension here with no real easy solutions.

I wonder if there's not an alternative way of framing all of this, not as "should we have accountability" but rather, "must accountability be externally legible, and what are the costs and consequences if it must?"

As an example, one of the interesting things about the modern university system is it bolts two incompatible accountability systems on top of each other.

When my wife got her PhD, it was a long, grueling, intensive process. In particular, though, it was expensive in the sense that she had a world class expert in her field who paid quite a lot of attention to her during that multiyear process (she fortunately had a good and ethical advisor). And you can see (if this is working correctly) the outlines of an older system of accountability; in theory, my wife went through an intensive acculturation process by an existing cohort of experts who could, by the end of the process, vouch that my wife had internalized values and norms that meant she could be trusted by the broader cohort of researchers in her field, and thus ought to be able to independently drive a research program. That doesn't mean there's not also lots of peer review and criticism and whatever else, of course, just that she went through a process that, if it worked correctly, meant she should have an internal mechanism of accountability that meant she could be trusted, in general. All of this is much, much clearer in action if you look at universities operating many decades ago, when they had much less money, much less bureaucracy, and generally much more independence.

But clearly the current version of the University is flooded with extra deans, and administrators, and IRB reports, and massive amounts of paperwork, and giant endowments that are lawfare targets, and many layers of bureaucracy, and a bunch of arguably screwed-up personal values from the cultural evolution of the last few decades. And many of those changes are intended to keep everyone in line and make sure everything is legible to the broader system. And so, in those spaces, the older model of producing virtuous professionals who can work cheaply by their own guidance is frequently superseded by this other "trustless society" model. And everything is slow, and expensive, and the values of the bureaucracy are often at odds with getting good work done, for all the reasons discussed in the linked conversation.

Or, to use another example, I've seen the claim made, by certain irritated black activists connected to screwed-up urban neighborhoods, that there's just as much crime going on out in the white suburbs, but the cops are racist and just don't enforce laws out there. Honestly, the first time I read that, I found it equal parts shocking, hilarious, and depressing. Because of course, the entire point of moving to a good suburb is that a critical mass of people have internalized an illegible, internal sense of accountability, which means they mostly don't actually need cops around all that often. And everyone around them knows that about them, and about themselves. That's literally why certain people find such places kind of stifling. (Obviously there are things that happen in suburbs, like weed smoking or domestic abuse or whatever. But we're talking about questions of degree here.) Meanwhile, in distressed neighborhoods, you simply have to have cops and a legible system, because a critical mass of people do not internalize that sense of accountability, and so you need the external accountability of the legible state.

Anyone who has worked in an effective small startup versus a giant profitable corporation has almost certainly run into these same divides, I suspect.

Getting back to the question of government in this context: a few years ago, I read through Michael Knox Beran's "WASPS: The Splendors and Miseries of an American Aristocracy", which was a great book, as well as C. S. Lewis's "Abolition of Man". They were a really nice pairing for capturing some of these big questions: whether a society needs to produce leaders who have an internal sense of morality and virtue, who try to do the right thing at any given moment based on an internally cultivated sense of accountability, versus the transition to a world where accountability is an external, entirely legible thing, where independent judgement and virtue can't be relied on, and where bureaucracy and technocracy are expected to solve all problems (like, say, the way Uber driver reviews might, as just one simple example). And I think you can find upsides and downsides to each approach.

a few years ago, I read through Michael Knox Beran's "WASPS: The Splendors and Miseries of an American Aristocracy"

Thanks for the rec! I've been thirsty for something exactly like this but didn't know where to begin looking. Serendipitous.

You might also be interested in George Marsden's "The Twilight of the American Enlightenment: The 1950s and the Crisis of Liberal Belief", Thomas Leonard's "Illiberal Reformers", and Helena Rosenblatt's "The Lost History of Liberalism: From Ancient Rome to the Twenty-First Century", all of which also cover this same era and dig into some overlapping topics and themes.

I've been trying to understand the shift from the worldview of the progressive era (where a lot of our inherited institutions were built and cemented) to... well, whatever emerged in the 60s and 70s, and all of these books were really useful for me in that regard. Leonard's book was a bit dry, but lots of great information. The other two read pretty easily, IIRC.