Culture War Roundup for the week of December 15, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

In the beginning, the C programming language was created, and there was much rejoicing. C is perhaps the single most influential language in the history of computing. It was "close to the hardware"*, it was fast*, it could do literally everything*. *Yes, I am simplifying a lot here.

But there were a few flaws. The programmer had to manage all the memory himself, which led to numerous security vulnerabilities in applications everywhere. Sometimes hackers exploited these vulnerabilities to the tune of several million dollars. This was bad.

But it's not like managing memory is particularly hard. It's just that in a complex codebase, it's easy to dereference a pointer you shouldn't, or forget that you already freed something, somewhere in potentially a million lines of code. So the greybeards said "lol git gud, just don't make mistakes."

The enlightened ones did not take this for an answer. They knew that the programmer shouldn't be burdened with micromanaging the details of memory, especially when security is at stake. Why is he allowed to call malloc without calling free?* The compiler should force him to do so. Better yet, the compiler can check the entire program for memory errors and refuse to compile, before a single unsafe line of code is ever run. *Actually memory leaks aren't usually security issues but I'm glossing over this because this post is already long.
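To make that concrete, here's a minimal sketch of the kind of use-after-free the borrow checker rejects outright. This isn't anyone's real code, and it won't even compile, which is exactly the point:

```rust
fn main() {
    let s = String::from("hello");
    let r = &s;        // borrow s
    drop(s);           // error[E0505]: cannot move out of `s` because it is borrowed
    println!("{r}");   // the borrow is still live here, so freeing s above is rejected
}
```

The equivalent free-then-use in C compiles without complaint and corrupts memory at runtime.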

They had discovered something profound: Absent external forces, the programmer will be lazy and choose the path of least resistance. And they created a language based on this principle. In C, you may get away with not checking the return value of a function that could error. In Rust, that is completely unacceptable and will make the compiler cry. The path of least resistance in C is to do nothing, while the path of least resistance in Rust is to handle the error.
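A toy sketch of the contrast (the file name is made up). Strictly speaking, a discarded Result is a compiler warning by default rather than a hard error, but it's one that teams routinely promote to an error:

```rust
use std::fs::File;

fn main() {
    // Ignoring a failing return value compiles silently in C; here it trips
    // the `unused_must_use` lint: "warning: unused `Result` that must be used".
    File::open("config.toml");

    // And you can't get at the File at all without acknowledging the Err case:
    match File::open("config.toml") {
        Ok(_f) => println!("opened"),
        Err(e) => eprintln!("open failed: {e}"),
    }
}
```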

That's what makes Rust a better programming language. And I have to agree with the zealots: they are right on this.

...So I have to be disappointed when they're not.

Rust has kept popping up in the news over the past couple of months. In November, a bug in Rust code deployed by Cloudflare took down their infrastructure, and half the Internet with it. (Why Cloudflare even has a monopoly on half the Internet is a controversial topic for another time.) The cause? A programmer didn't handle the error from a function.

Well, that's not technically true: they did. It's just that calling .unwrap(), a method that immediately crashes the program on error, counts as "handling" the error. In other words, the path of least resistance is not to actually handle the error, but to crash. I'd argue this isn't a better outcome than what would have happened in C, which would also be to crash. Sure, the crash won't be a segfault in Rust, but that doesn't matter much when half the Internet dies.
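Sketched with a made-up function (this is not Cloudflare's actual code, just the shape of the pattern):

```rust
fn read_limit(input: &str) -> Result<u32, std::num::ParseIntError> {
    input.trim().parse()
}

fn main() {
    // The path of least resistance: one trailing method call, and on any
    // error the whole process panics.
    // let limit = read_limit("not a number").unwrap();

    // Actually handling the error means deciding what the program does instead:
    let limit = match read_limit("not a number") {
        Ok(n) => n,
        Err(e) => {
            eprintln!("bad limit, falling back to default: {e}");
            100
        }
    };
    println!("limit = {limit}");
}
```

One path is a single method call tacked onto the end; the other forces you to decide what failure actually means. No prizes for guessing which one gets written at 5 PM on a Friday.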

This month, a CVE was filed against the Rust part of the Linux kernel, and ironically enough, it turned out to be a memory corruption vulnerability. "But how could this happen?" Rust has these things called unsafe blocks, which let you do unsafe memory operations closer to what you would be allowed to do in C (though, granted, I have heard convincing arguments that unsafe Rust is still generally safer than C). So the path of least resistance is not to do things the safest way, but to surround everything in unsafe once you get tired of fighting the borrow checker.
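For the unfamiliar, a tiny sketch of what unsafe buys you. This particular example is harmless, but nothing about the syntax guarantees that:

```rust
fn main() {
    let x: u32 = 7;
    let p = &x as *const u32;

    // let y = *p;          // rejected: dereferencing a raw pointer requires `unsafe`
    let y = unsafe { *p };  // compiles: the compiler now just takes your word for it.
                            // If `p` were dangling, this would compile identically,
                            // and you'd be back to C-style undefined behavior.
    println!("{y}");
}
```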

I hear the same pitch all the time from Rust advocates: "C is unsafe, programmers are too fallible, we must use a language that forces good code." They consistently blame the language, not the programmer. So how did they react to the above incidents? Did they blame the programmer, or the language?

"Oh, you just shouldn't use unwrap like that." "Duh, don't use unsafe, it's obviously unsafe." Sound familiar? They're blaming the programmer. Even Null of Kiwi Farms had this take on his podcast.

If I were one of them, I would throw my hands up and admit that the language didn't have guardrails to prevent this: if I would blame C in a universe where these incidents happened in equivalent C code, then I should blame Rust here. But then I wouldn't be a Rust zealot. I'd just be a Rust kinda-supporter. I'd have to carefully consider the nuances of the language and weigh various factors before forming an opinion. Oh no, the horror! And if I went the other way and blamed the programmer, it wouldn't be long before I got the nagging feeling that I'm just like a C-nile greybeard telling the programmers to git gud, and at that point there's not much point in using Rust if we just assume programmers are infallible.

It's a Catch-22, in other words.

To be clear, I'm not saying that these incidents alone mean Rust is a bad choice for anything, ever. I'm not saying Cloudflare or Linux shouldn't use Rust. I'm not telling people what they should or shouldn't use. I'm just pointing out the double standards. Rust people can attack C all day using one set of (IMO, entirely justified) standards, but when they are confronted with these incidents, they suddenly switch to another set of standards. Or to put it more clearly, they have a motte and bailey. Motte: "Rust can't prevent shitty programmers from writing shitty code." Bailey: "C is unsafe, because of all the memory unsafe code people have written, and we should rewrite everything in Rust to fix all of it!"

An engineering organization doesn't have to accept the default compiler behavior for a language. They can use linters and other tools to restrict (or expand!) the kind of code that's acceptable. And they can have a culture that values thoroughness or that values moving fast and breaking things.
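For example, in Rust a team can ratchet the defaults in either direction with crate-level lints. These lints are real; the policy below is just one hypothetical choice:

```rust
// Crate root (e.g. src/main.rs): opt in to a stricter-than-default policy.
#![deny(unsafe_code)]          // rustc lint: no `unsafe` blocks anywhere in this crate
#![deny(clippy::unwrap_used)]  // clippy restriction lint, enforced by `cargo clippy`

fn main() {
    let parsed: Result<u32, _> = "3".parse();
    // parsed.unwrap();        // a hard error under the policy above
    if let Ok(n) = parsed {
        println!("{n}");
    }
}
```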

I think the best argument for something like Rust is that it makes it easier to guarantee quality where it matters to the organization. If quality doesn't matter to the organization, whether that's expressed explicitly through tooling and coding standards or implicitly through what gets people promoted or fired, then people will circumvent the safeguards no matter what the language is.

Therein lies the rub, though: the way Rust is being introduced defeats the best (and possibly only) argument for it. If they wanted to move fast and break things, they could just stick to C. Hell, in some of these cases the C code is even the thorough option, since it's already been in use for many years and is well tested. Rewriting it in a completely new language, marketed entirely on memory safety, only to disable the safety features throughout the codebase is supposed to achieve what, exactly?

As a former Haskell dev, I'm reminded of why pure functional languages are uncommon in production. Pure functional languages are amazing 90% of the time but a disaster the other 10%. Since that 10% can derail a project, people don't want to use them. The solution has been integrating functional features into multiparadigm languages, so that devs can write 90% of their code in a functional style and use imperative code where the functional style just doesn't work well.

Rust's memory safety is great 90% of the time and becomes a blocker the other 10%. A combination of Rust's memory-safety features and unsafe escape hatches allows for high flexibility and relatively high memory safety.
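The idiomatic version of that combination is to confine unsafe to a small audited core behind a safe API, rather than sprinkling it through the codebase. A minimal sketch of the pattern (the function is made up):

```rust
/// Safe wrapper: the invariant is checked once up front, and the `unsafe`
/// inside is justified by that check rather than by crossed fingers.
pub fn sum_first_n(data: &[u64], n: usize) -> u64 {
    assert!(n <= data.len(), "n out of bounds");
    let mut total = 0;
    for i in 0..n {
        // SAFETY: i < n <= data.len(), so the unchecked access is in bounds.
        total += unsafe { *data.get_unchecked(i) };
    }
    total
}

fn main() {
    println!("{}", sum_first_n(&[1, 2, 3, 4], 3)); // prints 6
}
```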

I don't know if the split is going to be 90/10 when you're messing around with the kernel. Also, when you're rewriting old code from scratch, the risk of introducing new bugs is pretty high. When you want to replace something that's been in production for years, if not decades, you'll need a better argument than "it's perfectly safe 90% of the time".