This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

In the beginning, the C programming language was created, and there was much rejoicing. C is perhaps the single most influential language in the history of computing. It was "close to the hardware"*, it was fast*, it could do literally everything*. *Yes, I am simplifying a lot here.
But there were a few flaws. The programmer had to manage all the memory by himself, and that led to numerous security vulnerabilities in applications everywhere. Sometimes hackers exploited these vulnerabilities to the tune of several million dollars. This was bad.
But it's not like managing memory is particularly hard. It's just that with complex codebases, it's easy to miss a pointer dereference, or forget that you freed something, somewhere in potentially a million lines of code. So the greybeards said "lol git gud, just don't make mistakes."
The enlightened ones did not take this for an answer. They knew that the programmer shouldn't be burdened with micromanaging the details of memory, especially when security is at stake. Why is he allowed to call `malloc` without calling `free`?* The compiler should force him to do so. Better yet, the compiler can check the entire program for memory errors and refuse to compile, before a single unsafe line of code is ever run. *Actually memory leaks aren't usually security issues, but I'm glossing over this because this post is already long.

They had discovered something profound: absent external forces, the programmer will be lazy and choose the path of least resistance. And they created a language based on this principle. In C, you may get away with not checking the return value of a function that could error. In Rust, that is completely unacceptable and will make the compiler cry. The path of least resistance in C is to do nothing, while the path of least resistance in Rust is to handle the error.
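To make that concrete, here's a minimal sketch of my own (not from any real codebase) of the two paths. In C, nothing stops you from ignoring a failing call; in Rust, the failure is part of the return type, the compiler warns if you drop a `Result` on the floor, and the laziest way to silence it is the `?` operator, which actually propagates the error.

```rust
use std::fs;
use std::io;

// Hypothetical config loader for illustration. fs::read_to_string returns
// Result<String, io::Error>: the possibility of failure is in the type.
fn read_config() -> Result<String, io::Error> {
    // `?` is the path of least resistance: on error it returns early,
    // forwarding the io::Error to the caller instead of pretending it
    // can't happen.
    let contents = fs::read_to_string("config.toml")?;
    Ok(contents)
}

fn main() {
    // Calling read_config() and ignoring the Result triggers the
    // `unused_must_use` warning -- the compiler "crying":
    // read_config();   // warning: unused `Result` that must be used
    match read_config() {
        Ok(cfg) => println!("loaded {} bytes of config", cfg.len()),
        Err(e) => eprintln!("could not read config: {e}"),
    }
}
```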
That's what makes Rust a better programming language. And I have to agree with the zealots, they are right on this.
...So I have to be disappointed when they're not.
Rust seems to keep popping up in the news in the past couple of months. In November, a bug in Rust code deployed by Cloudflare took down their infrastructure, and half the Internet with it. (Why Cloudflare even has a monopoly on half the Internet is a controversial topic for another time.) The cause? A programmer didn't handle the error from a function.
Well, that's technically not true; they did. It's just that calling `.unwrap()`, a function which will immediately abort the application on error, counts as "handling" the error. In other words, the path of least resistance is not to actually handle the error, but to crash. I argue that this isn't a better outcome than what would have happened in C, which would also be to crash. Sure, the crash won't be a segfault in Rust, but that doesn't matter if half the Internet dies.
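To illustrate the difference (a toy sketch of mine, not Cloudflare's actual code): both lines below count as "dealing with" the error, but only one keeps the process alive when the input is bad.

```rust
use std::num::ParseIntError;

// Pretend this value arrives from the network or a config push.
fn parse_limit(raw: &str) -> Result<u32, ParseIntError> {
    raw.trim().parse::<u32>()
}

fn main() {
    let raw = "not-a-number";

    // Path of least resistance: "handle" the error by aborting the process.
    // let limit = parse_limit(raw).unwrap(); // panics, service goes down

    // Actually handling it: fall back to a default and keep serving traffic.
    let limit = parse_limit(raw).unwrap_or(100);
    println!("using request limit {limit}");
}
```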
This month, a CVE was filed in the Rust part of the Linux kernel, and it turned out to be a memory corruption vulnerability, ironically enough. "But how could this happen?" Rust has these things called `unsafe` blocks that let you do unsafe memory operations, closer to what you would be allowed to do in C (though granted, I have heard convincing arguments that unsafe Rust is still generally safer than C). So the path of least resistance is not to do things the safest way, but to just surround everything in `unsafe` if you get tired of fighting the borrow checker.
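For anyone who hasn't seen one, an `unsafe` block looks like this (again a contrived sketch of mine, not the kernel code): the keyword is just a marker telling the compiler to stand back, and inside it you get C-style raw pointer access with C-style responsibility for getting it right.

```rust
fn main() {
    let values = [10u8, 20, 30];
    let p = values.as_ptr();

    // Safe Rust refuses to dereference raw pointers; wrap the operation in
    // `unsafe` and the borrow checker steps aside. Whether the offset is in
    // bounds is now entirely the programmer's problem, exactly as in C.
    let third = unsafe { *p.add(2) }; // fine: index 2 is in bounds
    // let oops = unsafe { *p.add(10) }; // also compiles; reads garbage memory
    println!("third value: {third}");
}
```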
I hear the same pitch all the time from Rust advocates: "C is unsafe, programmers are too fallible, we must use a language that forces good code." They consistently blame the language, and don't blame the programmer. So how did they react to the above incidents? Did they blame the programmer, or the language?

They blamed the programmer. "Duh, don't use `.unwrap()` like that." "Duh, don't use `unsafe`, it's obviously unsafe."

If I was one of them, I would throw my hands up and admit that the language didn't have guardrails to prevent this, so if I would blame C in a universe where these incidents happened in equivalent C code, then I should blame Rust here. But then, I wouldn't be a Rust zealot. I'd just be a Rust kinda-supporter. I'd have to carefully consider the nuances of the language and take into account various factors before forming an opinion. Oh no, the horror! And if I went the other way and blamed the programmer, it wouldn't be long before I'd have this nagging feeling that I'm just like a C-nile greybeard, telling the programmers to git gud, and at that point, there seems to be little point in using Rust if we just assume that programmers are infallible.
It's a Catch-22, in other words.
To be clear, I'm not saying that these incidents alone mean Rust is a bad choice for anything, ever. I'm not saying Cloudflare or Linux shouldn't use Rust. I'm not telling people what they should or shouldn't use. I'm just pointing out the double standards. Rust people can attack C all day using one set of (IMO, entirely justified) standards, but when they are confronted with these incidents, they suddenly switch to another set of standards. Or to put it more clearly, they have a motte and bailey. Motte: "Rust can't prevent shitty programmers from writing shitty code." Bailey: "C is unsafe, because of all the memory unsafe code people have written, and we should rewrite everything in Rust to fix all of it!"
Preemptively: garbage collection is a collection of garbage and we would do well to rid ourselves of it. I do not consider garbage collected languages a viable option for anything that even vaguely cares about performance, and they are objectively not a viable option for kernel or firmware spaces.
That said... yes, a safer-C would be useful, and it would be nice if Rust could be that, but I don't think it can. C has too much inertia, and Rust made too many decisions that look arbitrary from a C programmer's perspective and grind against C-like intuitions for a comfortable swap. And since the "pain" of C is actually pretty darn low on a per-developer basis (even if the occasional memory-safety CVE is a big problem for society), nothing short of an official Software Engineer Licensing system is going to get them to move. Sort of a tragedy of the commons problem. Try again, but be more like C. Maybe then.
My hot take is that too many programmers use garbage collection as a crutch. GCs free you from some very specific work having to do with allocating and freeing memory, but they are not a "get out of jail free" card for never thinking about memory management or object lifetime again. I can think of a lot of examples from my own work in C# where people write inefficient code in hot paths without worrying about it, because they let the garbage collector clean up after them.
Fil-C exists and is probably the closest thing to a safer C.
There have been plenty of hard real-time systems and operating systems using GC.
Common Lisp can dominate benchmarks (over C and Fortran) but often gets kicked out of them, because they say e.g. inlining assembly doesn't count, even though the CL programmer generates and optimizes that assembly from the REPL (emitting it via compile-time macros or such). I've worked on CL HFT systems (n.b. since ~2017 the field hasn't looked anything like the popular picture of it, because of regulatory and policy changes).
APL or BQN are also great, and can be used to write compilers with competitive performance.
Various Forths offer memory management paradigms different from C's, with more safety and reliability (e.g. the `ALLOT` word). Indeed, the preferred way is for everything to run on the stack alone.

There have been better-than-C options for longer than we've been alive. That e.g. Lisp required a dozen MB of RAM caused cost issues some decades ago, but now that memory is cheap...
Anyway, modern C++ memory management is closer to Rust's than to C's, and Swift has some nice innovations too. Many things can be done - the OS could even manage memory for the program. Research has shown how GC can theoretically surpass manual memory management - and today's GCs are already faster; just look at runtime and wall-clock time. The developer today chooses when to trade latency for throughput and wall-clock speed.
A bit hyperbolic, no?
I'd say the opposite. GC languages are only unviable for systems that care about exceptional performance.
Quant trading works with GCs. ML & gaming have a unique preference for C++ because of the ecosystem, so I'll treat them as exceptions. Google uses Go for large scale systems (not the core, but pretty much everything else). Clearly it's good enough for most systems work.
I was writing some code to optimise within constraints - basically just a massive pile of nested loops and if statements. It did well so we ported it to production, rewriting everything in C++.
The result was literally hundreds, maybe thousands of times faster. It went from being something that ran with a visible delay to something I could run in real-time.
Fair enough. I work with numeric data. Most loops get vectorized as part of the numeric processing packages I'm using. I can imagine there are situations where nested loops can't be avoided on the critical path, and that causes a lot of pain.
I worked at a startup that was having huge problems with their server responsiveness due to Go's GC. They unfortunately believed the hype that it was a fancy concurrent GC with very small stop-the-world pauses. What they didn't realize was that while the GC was running, many common memory-accessing operations slowed down 10x (due to e.g. memory fences to avoid screwing with the GC). The slow performance would snowball, and you'd end up with the server spending most of its time in an unacceptable high-latency state.
We did manage to get some good performance out of Go eventually, but it was by explicitly writing our code to bypass the GC (including managing our own memory arena). TBH I like Go in general, but I think you underestimate just how costly a GCed language, even with a modern fancy GC, can be.
Quants care about latency, yes, but they're more than happy to throw a bit more hardware at their problems.
I can see I'll have to be more specific about what I take "performance" to mean. Performance is... efficiency. How much time, how many CPU clock cycles, how much memory, how many watts do you use while performing your task? Latency is one slice of it - a poorly written program will have poor performance on multiple dimensions, including latency - but low-latency alone is not the whole picture. A data center would likely not be happy to know you've reduced their latency at the cost of a large increase in power draw - power and cooling are a major factor in their operations! For game consoles, the hardware is fixed. If you take more compute than the console has to give you to get the next frame ready, your performance is poor. On any platform, if you use more memory than is available, everything suffers as you swap out to disk.
If your overriding concern is latency, to the exclusion of other performance concerns, I guess I can soften to say that GC may be workable.
Echoing @Imaginary_Knowledge but on a different tangent: in terms of garbage collection and high performance, the exception is obviously Jane Street with OCaml. Now, is this the exception that breaks or proves the rule? I think only the long arm of history will be able to discern.
Obsolete take.
Have you looked at a modern GC like ZGC? We're talking sub-millisecond pause times now. GC performance isn't a practical problem anymore. You're repeating obsolete 20 year old memes.
Ever use an Android phone? Plenty fast UI. Android is built on Java, and it has a GC. Works fine, even at pretty low levels of the framework stack.
I'm convinced we could push a modern GC to the kernel and it would work fine there too. (RCU is already a beast of complexity and nondeterministic timing and nobody complains about that.)
Please update your prejudices to reflect the current state of technology.
Also look at the performance improvements that Microsoft announces with every new version of .NET. Where speed is absolutely critical, there is still usually no beating C/C++/Rust, but C# has become blazing fast compared to how it used to be, and is actually competitive with lower level languages in some cases.
Gladly!
But more seriously, low latency isn't the whole picture. If I care about performance, why would I have so much spare CPU time lying around that I can essentially pin an entire core to be the GC manager?