
Culture War Roundup for the week of December 15, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


In the beginning, the C programming language was created, and there was much rejoicing. C is perhaps the single most influential language in the history of computing. It was "close to the hardware"*, it was fast*, it could do literally everything*. *Yes, I am simplifying a lot here.

But there were a few flaws. The programmer had to manage all the memory by himself, and that led to numerous security vulnerabilities in applications everywhere. Sometimes hackers exploited these vulnerabilities to the tune of several million dollars. This was bad.

But it's not like managing memory is particularly hard. It's just that in complex codebases, it's easy to miss a bad pointer dereference, or forget that you already freed something, somewhere in potentially a million lines of code. So the greybeards said "lol git gud, just don't make mistakes."

The enlightened ones did not take this for an answer. They knew that the programmer shouldn't be burdened with micromanaging the details of memory, especially when security is at stake. Why is he allowed to call malloc without calling free?* The compiler should force him to do so. Better yet, the compiler can check the entire program for memory errors and refuse to compile, before a single unsafe line of code is ever run. *Actually memory leaks aren't usually security issues but I'm glossing over this because this post is already long.
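To make the idea concrete, here is a minimal sketch in the language those enlightened ones went on to create (a sketch of the ownership idea, not a tutorial):

```rust
// Sketch: ownership means the compiler inserts the "free" for you, so there
// is no malloc/free pair for the programmer to mismatch.
fn main() {
    let buffer = String::from("allocated on the heap"); // allocation happens here
    println!("{buffer}");
    // `buffer` goes out of scope here and is freed automatically. Using it
    // after a move, or freeing it twice, is a compile-time error, caught
    // before a single line of the program ever runs.
}
```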

They had discovered something profound: Absent external forces, the programmer will be lazy and choose the path of least resistance. And they created a language based on this principle. In C, you may get away with not checking the return value of a function that could error. In Rust, that is completely unacceptable and will make the compiler cry. The path of least resistance in C is to do nothing, while the path of least resistance in Rust is to handle the error.
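Concretely (a minimal sketch; the config file name is just a placeholder):

```rust
use std::fs::File;
use std::io;

// The lazy path in Rust: `?` hands the error back to the caller instead of
// silently ignoring it, the way C lets you ignore a NULL from fopen.
fn open_config() -> io::Result<File> {
    let file = File::open("config.toml")?; // on failure, return the error here
    Ok(file)
}

fn main() {
    // File::open("config.toml"); // this line alone draws an unused_must_use warning
    match open_config() {
        Ok(_) => println!("opened config"),
        Err(e) => eprintln!("could not open config: {e}"),
    }
}
```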

That's what makes Rust a better programming language. And I have to agree with the zealots; they are right on this.

...So I have to be disappointed when they're not.

Rust has kept popping up in the news over the past couple of months. In November, a bug in Rust code deployed by Cloudflare took down their infrastructure, and half the Internet with it. (Why Cloudflare even has a monopoly on half the Internet is a controversial topic for another time.) The cause? A programmer didn't handle the error from a function.

Well, that's technically not true; they did. It's just that calling .unwrap(), a method that panics (taking the program down) when the value is an error, counts as "handling" the error. In other words, the path of least resistance is not to actually handle the error but to crash. I argue that this isn't a better outcome than what would have happened in C, which would also be to crash. Sure, the crash won't be a segfault in Rust, but that doesn't matter if half the Internet dies.
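In miniature (a toy sketch of the pattern, not Cloudflare's actual code; the port parsing and the 8080 default are made up):

```rust
fn port_from_config(raw: &str) -> u16 {
    // The "handled" version: if parsing fails, unwrap panics and the process dies.
    raw.parse::<u16>().unwrap()
}

fn port_from_config_handled(raw: &str) -> u16 {
    // Actually handling it: fall back to a default instead of crashing.
    raw.parse::<u16>().unwrap_or(8080)
}

fn main() {
    println!("{}", port_from_config_handled("not a number")); // prints 8080
    println!("{}", port_from_config("not a number"));         // panics here
}
```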

This month, a CVE was filed against the Rust part of the Linux kernel, and it turned out to be, ironically enough, a memory corruption vulnerability. "But how could this happen?" Rust has these things called unsafe blocks that let you do unsafe memory operations, closer to what you would be allowed to do in C (though, granted, I have heard convincing arguments that unsafe Rust is still generally safer than C). So the path of least resistance is not to do things the safest way, but to surround everything in unsafe once you get tired of fighting the borrow checker.
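For readers who haven't seen it, the escape hatch looks like this (a minimal illustration, nothing like the kernel's actual code):

```rust
fn main() {
    let values = [10u8, 20, 30];

    // Safe Rust: access is bounds-checked. A bad index panics, or with
    // `.get()` returns None; either way, no memory corruption is possible.
    println!("{:?}", values.get(7)); // None

    // The escape hatch: inside `unsafe`, the compiler takes your word for it.
    // `get_unchecked` skips the bounds check; hand it an out-of-range index
    // and you get the same out-of-bounds read C would have happily given you.
    let v = unsafe { *values.get_unchecked(1) }; // 1 is in range, so this is fine
    println!("{v}"); // 20
}
```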

I hear the same pitch all the time from Rust advocates. "C is unsafe, programmers are too fallible, we must use a language that forces good code." They consistently blame the language, and don't blame the programmer. So how did they react to the above incidents? Did they blame the programmer, or the language?

"Oh, you just shouldn't use unwrap like that." "Duh, don't use unsafe, it's obviously unsafe." Sound familiar? They're blaming the programmer. Even Null of Kiwi Farms had this take on his podcast.

If I were one of them, I would throw my hands up and admit that the language didn't have guardrails to prevent this: if I would blame C in a universe where the same incidents happened in equivalent C code, then I should blame Rust here. But then, I wouldn't be a Rust zealot. I'd just be a Rust kinda-supporter. I'd have to carefully consider the nuances of the language and take various factors into account before forming an opinion. Oh no, the horror! And if I went the other way and blamed the programmer, it wouldn't be long before I had this nagging feeling that I'm just like a C-nile greybeard telling the programmers to git gud, and at that point there's not much point to using Rust if we just assume that programmers are infallible.

It's a Catch-22, in other words.

To be clear, I'm not saying that these incidents alone mean Rust is a bad choice for anything, ever. I'm not saying Cloudflare or Linux shouldn't use Rust. I'm not telling people what they should or shouldn't use. I'm just pointing out the double standard. Rust people can attack C all day using one set of (IMO, entirely justified) standards, but when they are confronted with these incidents, they suddenly switch to another set of standards. Or to put it more clearly, they have a motte and bailey. Motte: "Rust can't prevent shitty programmers from writing shitty code." Bailey: "C is unsafe, because of all the memory-unsafe code people have written, and we should rewrite everything in Rust to fix all of it!"

I don't know. I find this a topic that's pretty easy to be nuanced about. Different languages attempt to provide different guarantees to the programmer about how their code behaves. To provide those guarantees, they have to be able to understand the code and prove that it satisfies them. Most such languages provide ways to disable checking those guarantees for particular code sections, on the assumption that you, the programmer, have information the compiler lacks and that things will work without the compiler having to check. If you, the programmer, tell the compiler you know better and then turn out to be wrong, I think it's fine to blame the programmer.

I think everyone has, in their mind, a different idea about the extent to which buggy code should be caught by the compiler, and these ideas inform which side of the blame-the-programmer/blame-the-compiler divide you fall on.

As an example: in college, a friend and I had to write some networking libraries in C. At the time we didn't use any fancy editors or anything, just good old gedit and gcc. My friend was writing a function that was supposed to perform an arithmetic operation and return the output, but every time he ran it he got a different (implausible) result, even with the same inputs. What was happening was that he had accidentally omitted the return statement from his function, so he was getting back some random garbage from memory on every run. Should the C compiler let you declare a function that returns a value and then let you omit the return statement? Is that mistake your fault or the language's fault? Formally, doing this is undefined behavior, but undefined behavior does not always mean a crash!

> Well, that's technically not true; they did. It's just that calling .unwrap(), a method that panics (taking the program down) when the value is an error, counts as "handling" the error. In other words, the path of least resistance is not to actually handle the error but to crash. I argue that this isn't a better outcome than what would have happened in C, which would also be to crash. Sure, the crash won't be a segfault in Rust, but that doesn't matter if half the Internet dies.

In this case I find the behavior of Option<T>.unwrap() unintuitive, but I am also coming from the perspective of exception-based error handling. As an analogy, C#'s Nullable<T>.Value will throw an exception if the nullable is actually null; that option obviously isn't available in a no-exception world. Maybe the default behavior should be more like the ? operator (the Try trait), returning the error instead of panicking? Then let the programmer panic if the value is an error, although that introduces another layer of error checking!
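Concretely, the two paths look something like this (a toy sketch):

```rust
use std::num::ParseIntError;

// The `?` operator (sugar over the Try machinery) hands the error back to
// the caller instead of panicking the way `unwrap` does.
fn doubled(input: &str) -> Result<i64, ParseIntError> {
    let n: i64 = input.parse()?; // on a parse failure, return Err(...) here
    Ok(n * 2)
}

fn main() {
    println!("{:?}", doubled("21"));     // Ok(42)
    println!("{:?}", doubled("twenty")); // Err(ParseIntError { .. })
    // The panicking path, for contrast:
    // let n: i64 = "twenty".parse().unwrap(); // would crash the program here
}
```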

> This month, a CVE was filed against the Rust part of the Linux kernel, and it turned out to be, ironically enough, a memory corruption vulnerability. "But how could this happen?" Rust has these things called unsafe blocks that let you do unsafe memory operations, closer to what you would be allowed to do in C (though, granted, I have heard convincing arguments that unsafe Rust is still generally safer than C). So the path of least resistance is not to do things the safest way, but to surround everything in unsafe once you get tired of fighting the borrow checker.

I'm a little unsure of the criticism of Rust as a language here. Is it that unsafe exists? Presumably all the code that is not in an unsafe block has guarantees that equivalent C code would not have. Is that not a benefit? Is the worst case here that you wrap all your Rust code in unsafe and then you end up... as good as C?

> To be clear, I'm not saying that these incidents alone mean Rust is a bad choice for anything, ever. I'm not saying Cloudflare or Linux shouldn't use Rust. I'm not telling people what they should or shouldn't use. I'm just pointing out the double standard. Rust people can attack C all day using one set of (IMO, entirely justified) standards, but when they are confronted with these incidents, they suddenly switch to another set of standards. Or to put it more clearly, they have a motte and bailey. Motte: "Rust can't prevent shitty programmers from writing shitty code." Bailey: "C is unsafe, because of all the memory-unsafe code people have written, and we should rewrite everything in Rust to fix all of it!"

I think there is a more productive discussion to be had here about how language features and guarantees can help protect against writing buggy code, and can potentially make it easier to review code for bugs. I suppose I think of it by analogy to TypeScript and JavaScript. All JavaScript is valid TypeScript, but TypeScript needs to be compiled to JavaScript, and that compilation step, in my experience, catches whole classes of errors that JavaScript's lack of typing lets through. Sure, you can write JavaScript that just doesn't have those errors, and most people do, but TypeScript renders them inexpressible. Similarly so for C and (non-unsafe) Rust.
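The same effect on the Rust side, with a hypothetical lookup function: a possibly-absent value is an Option, and using it as though it were definitely present simply doesn't compile.

```rust
// Hypothetical lookup: a value that may be absent is an Option, so the
// JavaScript-style "oops, it was undefined" bug becomes inexpressible.
fn find_user(id: u32) -> Option<&'static str> {
    if id == 1 { Some("alice") } else { None }
}

fn main() {
    // let name: &str = find_user(2); // compile error: expected `&str`, found `Option<&str>`
    match find_user(2) {
        Some(name) => println!("found {name}"),
        None => println!("no such user"),
    }
}
```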

> Should the C compiler let you declare a function that returns a value and then let you omit the return statement? Is that mistake your fault or the language's fault? Formally, doing this is undefined behavior, but undefined behavior does not always mean a crash!

It's the language's fault (that probably should never have been allowed by the standard, and if it weren't, the compiler could catch it by default), and it's your fault (you shouldn't have written that), and it's other language users' fault.

That third one might take a bit of explanation.

Any decent compiler these days will warn you about that error at compile time, and will stop the compilation if you use a flag like -Werror to turn warnings into compile-time errors. So just always use -Werror, right? We could all be writing a safer version of C without even having to change the C standard!

Well, "look for functions that declared a return value but didn't return one" is an especially easy error for a compiler to catch; others are subtler and trickier to detect. Maybe you add -Wall to get another batch of warnings, and -Wextra for another batch, and you throw in -Wshadow and -Wunused-value and -Wcast-qual and -Wlogical-op and ... well, that's a great way to write your code, right up until you have to #include someone else's code. At some point your OCD attention to detail will exceed that of the third-party authors who wrote one of your libraries, and you can't always fault them for it (these warnings often flag code that looks wrong, whether or not it is wrong; even omitting a return statement could probably save one CPU cycle in cases where you knew the return value wasn't going to be used!).

So, I have special headers now: one to emit a bunch of compiler pragmas before the #include of certain third-party headers, turning off my more paranoid warning settings before they can hit false positives, then another to turn all the warnings back on again for my own code, like a primitive version of "unsafe".
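For what it's worth, Rust ships that sandwich as a language feature rather than a header trick: lint levels are scoped attributes, so the strictness and its exceptions live next to the code they govern. A small sketch (the helper function is hypothetical):

```rust
// Crate-wide: promote every warning to a hard compile error (-Werror, roughly).
#![deny(warnings)]

// Scoped exception: relax one lint for one item, e.g. vendored third-party
// code, without loosening anything anywhere else in the crate.
#[allow(dead_code)]
fn vendored_helper() {}

fn main() {
    println!("lint levels are scoped attributes, not global flags");
}
```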

I was once paid to port C code from a system that allowed code to dereference null pointers (by just making the MMU map that memory page and filling it with zeroes). And so the C code written for that system used that behavior, depending on foo = *bar; to set foo to 0 in cases where they should have written foo = bar ? *bar : 0; instead. As soon as you give people too much leeway, someone will use it, and from that point onward you're a bit stuck, unable to take back that leeway without breaking things for those users. I like the "nasal demons" joke about what a compiler is allowed to do when you write Undefined Behavior, but really the worst thing a compiler is allowed to do with UB is to do exactly what you expected it to, because then you think you're fine right up until the point where suddenly you're not.

This is getting off topic, but I thoroughly enjoy reading Raymond Chen's blog, The Old New Thing, for its many stories of Windows bugs, implementation details, and programmer misuses that later became compatibility constraints. When you upgrade your operating system and your Favorite Program stops working, people rarely blame the Favorite Program, even if it was the one doing something unsupported!

https://xkcd.com/1172/

Or as a modern sage once explained:

> On Sun, Dec 23, 2012 at 6:08 AM, Mauro Carvalho Chehab <mchehab@redhat.com> wrote:
>
> > Are you saying that pulseaudio is entering on some weird loop if the returned value is not -EINVAL? That seems a bug at pulseaudio.
>
> Mauro, SHUT THE FUCK UP!
>
> It's a bug alright - in the kernel. How long have you been a maintainer? And you still haven't learnt the first rule of kernel maintenance?
>
> If a change results in user programs breaking, it's a bug in the kernel. We never EVER blame the user programs. How hard can this be to understand?
>
> To make matters worse, commit f0ed2ce840b3 is clearly total and utter CRAP even if it didn't break applications. ENOENT is not a valid error return from an ioctl. Never has been, never will be. ENOENT means "No such file and directory", and is for path operations. ioctl's are done on files that have already been opened, there's no way in hell that ENOENT would ever be valid.
>
> > So, on a first glance, this doesn't sound like a regression, but, instead, it looks tha pulseaudio/tumbleweed has some serious bugs and/or regressions.
>
> Shut up, Mauro. And I don't ever want to hear that kind of obvious garbage and idiocy from a kernel maintainer again. Seriously.
>
> I'd wait for Rafael's patch to go through you, but I have another error report in my mailbox of all KDE media applications being broken by v3.8-rc1, and I bet it's the same kernel bug. And you've shown yourself to not be competent in this issue, so I'll apply it directly and immediately myself.
>
> WE DO NOT BREAK USERSPACE!
>
> Seriously. How hard is this rule to understand? We particularly don't break user space with TOTAL CRAP. I'm angry, because your whole email was so horribly wrong, and the patch that broke things was so obviously crap. The whole patch is incredibly broken shit. It adds an insane error code (ENOENT), and then because it's so insane, it adds a few places to fix it up ("ret == -ENOENT ? -EINVAL : ret").
>
> The fact that you then try to make excuses for breaking user space, and blaming some external program that used to work, is just shameful. It's not how we work.
>
> Fix your f*cking "compliance tool", because it is obviously broken. And fix your approach to kernel programming.

It's a little interesting to contrast this with my perception of Chen's attitude. He was clearly dedicated to making sure software that used to work would continue to work for users; it is basically never the user's fault that a program they bought does things wrong. On the other hand, he has palpable contempt for the developers of user-mode software who took a dependency on some undefined or non-contractual behavior and created these compatibility constraints. For example, from "Application compatibility layers are there for the customer, not for the program":

> Some time ago, a customer asked this curious question (paraphrased, as always):
>
> > Hi, we have a program that was originally designed for Windows XP and Windows Server 2003, but we found that it runs into difficulties on Windows Vista. We’ve found that if we set the program into Windows XP compatibility mode, then the program runs fine on Windows Vista. What changes do we need to make to our installer so that when the user runs it on Windows Vista, it automatically runs in Windows XP compatibility mode?
>
> Don’t touch that knob; the knob is there for the customer, not for the program. And it’s there to clean up after your mistakes, not to let you hide behind them.
>
> It’s like saying, “I normally toss my garbage on the sidewalk in front of the pet store, and every morning, when they open up, somebody sweeps up the garbage and tosses it into the trash. But the pet store isn’t open on Sundays, so on Sundays, the garbage just sits there. How can I get the pet store to open on Sundays, too?”
>
> The correct thing to do is to figure out what your program is doing wrong and fix it. You can use the Application Compatibility Toolkit to see all of the fixes that go into the Windows XP compatibility layer, then apply them one at a time until you find the one that gets your program running again. For example, if you find that your program runs fine once you apply the VersionLie shim, then go and fix your program’s operating system version checks.
>
> But don’t keep throwing garbage on the street.