
Culture War Roundup for the week of December 15, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


In the beginning, the C programming language was created, and there was much rejoicing. C is perhaps the single most influential language in the history of computing. It was "close to the hardware"*, it was fast*, it could do literally everything*. *Yes, I am simplifying a lot here.

But there were a few flaws. The programmer had to manage all the memory by himself, and that led to numerous security vulnerabilities in applications everywhere. Sometimes hackers exploited these vulnerabilities to the tune of several million dollars. This was bad.

But it's not like managing memory is particularly hard. It's just that with complex codebases, it's easy to miss a stray pointer dereference, or forget that you freed something, somewhere in potentially a million lines of code. So the greybeards said "lol git gud, just don't make mistakes."

The enlightened ones did not take this for an answer. They knew that the programmer shouldn't be burdened with micromanaging the details of memory, especially when security is at stake. Why is he allowed to call malloc without calling free?* The compiler should force him to do so. Better yet, the compiler can check the entire program for memory errors and refuse to compile, before a single unsafe line of code is ever run. *Actually memory leaks aren't usually security issues but I'm glossing over this because this post is already long.

They had discovered something profound: Absent external forces, the programmer will be lazy and choose the path of least resistance. And they created a language based on this principle. In C, you may get away with not checking the return value of a function that could error. In Rust, that is completely unacceptable and will make the compiler cry. The path of least resistance in C is to do nothing, while the path of least resistance in Rust is to handle the error.
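To make that concrete, here is a minimal sketch (a toy example of mine, not anyone's production code). In C, silently dropping an error return compiles without complaint; in Rust, an unhandled Result draws a compiler warning, so the laziest thing to type is the match:

use std::fs;

fn read_config(path: &str) -> Result<String, std::io::Error> {
    // The caller receives a Result and has to do *something* with it.
    fs::read_to_string(path)
}

fn main() {
    // Result is #[must_use]: writing `read_config("app.toml");` alone
    // would produce an unused-Result warning.
    match read_config("app.toml") {
        Ok(text) => println!("loaded {} bytes", text.len()),
        Err(e) => eprintln!("could not load config: {e}"),
    }
}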

That's what makes Rust a better programming language. And I have to agree with the zealots, they are right on this.

...So I have to be disappointed when they're not.

Rust has kept popping up in the news over the past couple of months. In November, a bug in Rust code deployed by Cloudflare took down their infrastructure, and half the Internet with it. (Why Cloudflare even has a monopoly on half the Internet is a controversial topic for another time.) The cause? A programmer didn't handle the error from a function.

Well that's technically not true, they did. It's just that calling .unwrap(), a function which will immediately abort the application on error, counts as "handling" the error. In other words, the path of least resistance is not to actually handle the error, but to crash. I argue that this isn't a better outcome than what would have happened in C, which would also be to crash. Sure, the crash won't be a segfault in Rust, but that doesn't matter if half the Internet dies.
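For illustration, the pattern looks roughly like this (a made-up miniature, not Cloudflare's actual code):

fn parse_limit(raw: &str) -> Result<usize, std::num::ParseIntError> {
    raw.trim().parse()
}

fn main() {
    // unwrap() "handles" the error by panicking and taking the whole
    // process down with it. Feed it bad input and it dies right here.
    let limit = parse_limit("not a number").unwrap();
    println!("limit = {limit}");
}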

This month, a CVE was filed in the Rust part of the Linux kernel, and it turned out to be a memory corruption vulnerability, ironically enough. "But how could this happen?" Rust has these things called unsafe blocks that let you do unsafe memory operations, closer to what you would be allowed to do in C (though granted, I have heard convincing arguments that unsafe Rust is still generally safer than C). So the path of least resistance is not to do things the safest way, but to just surround everything in unsafe if you get tired of fighting the borrow checker.
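For those who haven't seen one, a minimal sketch of what an unsafe block opts you out of (again, a toy of mine):

fn main() {
    let v = vec![10, 20, 30];
    // Raw pointer arithmetic: no bounds check, no borrow checker.
    // Change the 1 to a 3 and this becomes exactly the kind of
    // out-of-bounds read that safe Rust would never let through.
    let second = unsafe { *v.as_ptr().add(1) };
    println!("{second}");
}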

I hear the same pitch all the time from Rust advocates. "C is unsafe, programmers are too fallible, we must use a language that forces good code." They consistently blame the language, and don't blame the programmer. So how did they react to the above incidents? Did they blame the programmer, or the language?

"Oh, you just shouldn't use unwrap like that." "Duh, don't use unsafe, it's obviously unsafe." Sound familiar? They're blaming the programmer. Even Null of Kiwi Farms had this take on his podcast.

If I were one of them, I would throw my hands up and admit that the language didn't have guardrails to prevent this, so if I would blame C in a universe where the incidents happened in equivalent C code, then I should blame Rust here. But then, I wouldn't be a Rust zealot. I'd just be a Rust kinda-supporter. I'd have to carefully consider the nuances of the language and take into account various factors before forming an opinion. Oh no, the horror! And if I went the other way and blamed the programmer, it wouldn't be long before I'd have this nagging feeling that I'm just like a C-nile greybeard, telling the programmers to git gud, and at that point, there seems to be less of a point to using Rust if we just assume that programmers are infallible.

It's a Catch-22, in other words.

To be clear, I'm not saying that these incidents alone mean Rust is a bad choice for anything, ever. I'm not saying Cloudflare or Linux shouldn't use Rust. I'm not telling people what they should or shouldn't use. I'm just pointing out the double standards. Rust people can attack C all day using one set of (IMO, entirely justified) standards, but when they are confronted with these incidents, they suddenly switch to another set of standards. Or to put it more clearly, they have a motte and bailey. Motte: "Rust can't prevent shitty programmers from writing shitty code." Bailey: "C is unsafe, because of all the memory unsafe code people have written, and we should rewrite everything in Rust to fix all of it!"

Rust is completely unsuited for systems-level programming. It was designed to build web browsers, and it's great at that. But it simply does not have the ability to properly manage memory, and all its idioms encourage wasteful allocations and superfluous copies.

The addition of Rust to Linux was done by entryists, and will degrade the quality and readability of code. (Monolithic) kernels should be written in a single language.

That is an interesting opinion I have never encountered before. What makes Rust unable to "properly manage memory" in a way that C can? Or did you have another language in mind when you wrote this? You know you can just malloc and shuffle pointers around in Rust just like in C, right?

I don't use Rust, but I'm going to defend it in this case. In fact, I'll go further and defend the "buggy" code in the Cloudflare incident. If your code is heavily configurable, and you can't load your config, what else are you supposed to do? The same thing is true if you can't connect to your (required) DB, allocate (required) memory, etc. Sometimes you just need to die, loudly, so that someone can come in and fix the problem. IME, the worst messes come not from programs cleanly dying, but from them taking a mortal wound and then limping along, making a horrific mess of things in the process.

One can certainly criticize the code for not having a nicer error message. Maybe Rust is to blame for that, at least? Does unwrap not have a way to provide an error string? Although, any engineer should see what's going on from one look at the offending line, so I doubt it would make that much of a difference. It's not reasonable to blame a language for letting coders deliberately crash the program, either.
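For the record, it does: .expect() is unwrap() with an attached message, though the process still dies. A toy sketch:

fn main() {
    let cfg: Option<&str> = None;
    // Same abort-on-error semantics as unwrap(), but the panic message
    // at least tells whoever is on call what was missing.
    let value = cfg.expect("config missing: expected app.toml in cwd");
    println!("{value}");
}

Which rather supports the point that one look at the offending line tells you enough either way.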

IMO, the code itself is fine. The problem is that they deployed a new config to the entire internet all at once without checking that it even loads. THAT is baffling.

In defense of Rust:

  1. Rust was created in a world with C/C++, it has to account for existing developers, existing workflows, existing code, existing bugs (that are now features), etc.
  2. Reality is inherently unsafe. Anything Rust relies on is unsafe from the perspective of Rust. Like even if all of technology is built from the ground up in only safe rust, well, pesky cosmic rays get in the way and flip a bit somewhere.
  3. Because of 1 and 2, developers need to do unsafe stuff, and like always, it's git gud time.
  4. The guarantee of Rust from my perspective is that the search radius is reduced. Critical bugs like this one and the Cloudflare incident happen in and around unsafe/unwrap. This bumps up the chances of bugs being caught before code is even introduced, during review; and even when one gets through, it's easier to find out where.
  5. I think more stats are needed. Take the same time period Rust has been in the kernel: how many lines of C code were added vs. how many lines of Rust code? Let's compare how many CVEs were introduced by the new C code and the new Rust code. If I had to make a bet, I would bet on Rust.

I find criticisms that Rust is not good for exploratory work (data analysis, game development, scripting, etc.) much more persuasive, but then that just goes back to "find the right tools for the right job".

If there is a culture war element to this, I think broadly people are yet again conflating their distaste for the tool (Rust/gun) with their distaste for the users (Rust community/gun owners).

And maybe there's a greater technology story here: the usual one where people think !new_thing will solve all their problems, but actually !new_thing only solves most problems, and the remaining ones are the really complex ones (leading to a paradox of automation). Then certain people become cynical and disappointed and retreat to their old tools while younger, newer people just adopt and proliferate the !new_thing, and someday the cynical people wake up and find they missed the boat.

Preemptively: garbage collection is a collection of garbage and we would do well to rid ourselves of it. I do not consider garbage collected languages a viable option for anything that even vaguely cares about performance, and they are objectively not a viable option for kernel or firmware spaces.

That said... yes, a safer-C would be useful, and it would be nice if Rust could be that, but I don't think it can. C has too much inertia and there are too many places Rust made seemingly-arbitrary-from-the-perspective-of-C-programmers decisions that grind against C-like intuitions for a comfortable swap, and so since the "pain" of C is actually pretty darn low on a per-developer basis (even if the occasional memory safety CVE is a big problem for society) nothing short of an official Software Engineer Licensing system is going to get them to move. Sort of a tragedy of the commons problem. Try again, but be more like C. Maybe then.

My hot take is that too many programmers use garbage collection as a crutch. GCs free you from some very specific work having to do with allocating and freeing memory, but they are not a "get out of jail free" card that excuses you from ever thinking about memory management or object lifetime again. I can think of a lot of examples from my own work in C# where people write inefficient code in hot paths without worrying about it, because they let the garbage collector clean up after them.

Fil-C exists and is probably the closest thing to a safer C.

viable option for anything that even vaguely cares about performance

There have been plenty of hard real time systems and operating systems using GC.

Common Lisp can dominate benchmarks (over C and Fortran) but often gets kicked out, because they say e.g. in-lining assembly doesn't count even though the CL programmer generates and optimizes that assembly from the REPL (emitting it via compile time macros or such). I've worked on CL HFT systems (n.b. since ~2017 the field's not looked anything like the popular world things, because of regulatory and policy changes.)

APL or BQN are also great, and you can write compilers in them with competitive performance.

Various Forths offer memory management paradigms different from C's, with more safety and reliability (e.g. the ALLOT word). Indeed, the preferred way is for everything to run on the stack alone.

There have been better options than C for longer than we've lived. That e.g. Lisp required a dozen MB of RAM caused cost issues some decades ago, but now that memory is cheap...

Anyway, modern C++ memory management's closer to Rust than C, Swift has some nice innovations too. Many things can be done - the OS could even manage it for the program. Research has shown how GC can theoretically surpass manual memory management - and today GCs are faster already, just look at runtime and wall clock time. The developer today chooses when to trade latency for throughput and wall clock speed.

I've worked on CL HFT systems (n.b. since ~2017 the field's not looked anything like the popular world things, because of regulatory and policy changes.)

If you do an effortpost on your experience here, I'll find a way to compensate you.

Not much to say. I can't effortpost, but here's some rambling:

I don't know much math (and learned most in the last couple years) but architecting nice trade execution lets you do a lot of things; a good trade or correct insider knowledge is meaningless if you don't know how to isolate the opportunity from risks unrelated to what you want exposure to.

The systems themselves had a lot of inline assembly. Lisps all have some disasm function giving a function's assembly, which you can improve and inline for easy high performance (or e.g. dynamically change). The architectures were all OOP (i.e. moving hashmaps of data and assembly functions). In the same way a class can remove a level of if nesting, looking up particular fields in the object/map like "exit-inventory-below-optimal" or "exit-inventory-above-optimal" saves time, and those are all precalculated. They all used event sampling instead of time sampling. There were different models according to situation e.g. news can push the market to bimodal distributions around a new level, which governed particular data representation - laid out to encode decisions. Linear trend channels, vol compression breakout, support/resistance breakouts and trend change when linear trend channels break were insightful. I learned to write trading agents for each strategy (with an agent for each slight change e.g. for every .1% difference in stop loss) and all agents issuing internal orders, combined (e.g. some agents sell and others buy, canceling out) and then executed (Alan Dunne talks about "ensembles"). (I now only make a few trades a year with 2-5 year time horizons, so the agents' "votes" are weighted by success in the current and various other regimes, and they're working on various valuation schemes. Also log scale helps, because markets move in return space.)

Sampling is hard and important, since you need to choose data representations/current distributions/regimes etc. I like additive swarm systems. There was cool signal processing stuff for feedback control which I didn't understand, but which governs when to turn off (groups of) agents according to market stress and risk exposure. If you structure everything right, you'd have most computational power constantly rebuilding 100 GB of hash maps while the main loop does 2-3 lookups per agent on an event, everything on some group of correlated assets (like 5 gold mining companies in the same geography). (N.b. ensembles decorrelate things, different agents just with different stop losses have distinct return profiles even if only trading Brent.) Event sampling means if 20 things happen in an hour, but then 40 happen in 5 mins, and you're sampling every 10th thing... You'll have a lot going on during a little clock time on the spikes, hence precomputing things. Systemic indicators are driven by moving "windows" of data, whose updates are all recursively adjusted in the agent swarm. Remember, missing trades is fine but making bad ones is bad - so you'd have a more dedicated update loop for positions you're holding.

But everyone improved order execution; some HFT firms like Virtu shifted to providing order execution as a service. This is why IEX remains a small player.

Nowadays, off-exchange trading/dark pools have similar volume to exchanges, and while they're actually valuing assets, most can't see those transactions, which reduces overall price discovery. Far worse, passive inflows into indexes make up most exchange volume, which kills price discovery. You can do really nice things looking at the many thousands of stocks which have literally no analysts looking at them. (The investable world has really shrunk since the 70s: fewer quality markets (e.g. African and South American governments undertook awful policy so everyone left) and fewer publicly traded companies. Even the S&P 600 barely gets attention.)

Munger: "Investing is the only profession where inactivity is a competitive advantage."

viable option for anything that even vaguely cares about performance

Bit hyperbolic, no?

I'd say the opposite. GC languages are only unviable for systems that care about exceptional performance.

Quant trading works with GCs. ML & gaming have a unique preference for C++ because of the ecosystem, so I'll treat them as exceptions. Google uses Go for large scale systems (not the core, but pretty much everything else). Clearly it's good enough for most systems work.

I was writing some code to optimise within constraints - basically just a massive pile of nested loops and if statements. It did well so we ported it to production, rewriting everything in C++.

The result was literally hundreds, maybe thousands of times faster. It went from being something that ran with a visible delay to something I could run in real-time.

Fair enough. I work with numeric data. Most loops get vectorized as part of the numeric processing packages I'm using. I can imagine there are situations where nested loops can't be avoided on the critical path, and that causes a lot of pain.

Plus those numeric processing packages will almost certainly be using C or C++ under the hood for speed, because base Python is just far too slow when processing primitives.

I worked at a startup that was having huge problems with their server responsiveness due to Go's GC. They unfortunately believed the hype that it was a fancy concurrent GC with very small stop-the-world pauses. What they didn't realize was that while the GC was running, many common memory-accessing operations slowed down 10x (due to e.g. memory fences to avoid screwing with the GC). The slow performance would snowball, and you'd end up with the server spending most of its time in an unacceptable high-latency state.

We did manage to get some good performance out of Go eventually, but it was by explicitly writing our code to bypass the GC (including managing our own memory arena). TBH I like Go in general, but I think you underestimate just how costly a GCed language, even with a modern fancy GC, can be.

Quants care about latency, yes, but they're more than happy to throw a bit more hardware at their problems.

I can see I'll have to be more specific about what I take "performance" to mean. Performance is... efficiency. How much time, how many CPU clock cycles, how much memory, how many watts do you use while performing your task? Latency is one slice of it - a poorly written program will have poor performance on multiple dimensions, including latency - but low-latency alone is not the whole picture. A data center would likely not be happy to know you've reduced their latency at the cost of a large increase in power draw - power and cooling are a major factor in their operations! For game consoles, the hardware is fixed. If you take more compute than the console has to give you to get the next frame ready, your performance is poor. On any platform, if you use more memory than is available, everything suffers as you swap out to disk.

If your overriding concern is latency, to the exclusion of other performance concerns, I guess I can soften to say that GC may be workable.

Echoing @Imaginary_Knowledge but on a different tangent: in terms of garbage collection and high performance, the exception is obviously Jane Street with OCaml. Now, is this the exception that breaks or proves the rule? I think only the long arm of history will be able to discern.

Obsolete take.

Have you looked at a modern GC like ZGC? We're talking sub-millisecond pause times now. GC performance isn't a practical problem anymore. You're repeating obsolete, 20-year-old memes.

Ever use an Android phone? Plenty fast UI. Android is built on Java, and it has a GC. Works fine, even at pretty low levels of the framework stack.

I'm convinced we could push a modern GC to the kernel and it would work fine there too. (RCU is already a beast of complexity and nondeterministic timing and nobody complains about that.)

Please update your prejudices to reject the current state of technology.

Also look at the performance improvements that Microsoft announces with every new version of .NET. Where speed is absolutely critical, there is still usually no beating C/C++/Rust, but C# has become blazing fast compared to how it used to be, and is actually competitive with lower level languages in some cases.

Please update your prejudices to reject the current state of technology.

Gladly!

But more seriously, low latency isn't the whole picture. If I care about performance, why would I have so much spare CPU time lying around that I can essentially pin an entire core to be the GC manager?

God damned it. Something about this forum curses me with typos.

Rust is an interesting programming language, because it perfected the nanny-state compiler. Rust is infamously difficult to get to compile if you don’t know what you’re doing. You can spam .unwrap() and unsafe and still write unsafe code, but it requires you to at least actively choose to accept these flaws, as opposed to passively letting them slip by accidentally.
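And you can even revoke your own right to opt out: with the built-in unsafe_code lint, the compiler refuses any unsafe block in the crate. A small sketch:

// Crate-level attribute: any `unsafe` block anywhere in this crate
// is now a hard compile error, so the only remaining path is the safe one.
#![forbid(unsafe_code)]

fn main() {
    // let x = unsafe { *(&7 as *const i32) }; // uncommenting fails the build
    println!("no unsafe allowed in this crate");
}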

If AI is going to write code, I think Rust is actually going to point the way toward the future. AI can make writing code very easy but introduces all sorts of potential zero-day bugs and faults. Rust actually solves much of this, because many bugs the AI could write in other languages are not even valid Rust. The future of programming languages belongs to whoever develops an even more restrictive and advanced compiler that eliminates whole categories of AI errors before they can ever run. (A superset of python or typescript would be very appealing here.)

superset of python

You mean the language that is de facto completely untyped in the real world and does next to no checks on the code before trying to execute it?

That’s why you would invent a superset: to add type-checks and other context checks. You throw in all your existing python that your non-technical professors and data scientists and math majors worked out. Then as you generate new “scrython” code you are reasonably confident it isn’t creating more problems to solve later. That code will be rigorously defined and checked by a linter which will constrain the universe of possible AI errors.

I don’t think this is the only way to add guardrails around AI but eventually someone will have to do something like this. The sheer volume of python written and being written means AI will be asked to write python for a long time to come.

Python is going through a devx revolution right now. Pydantic, Astral and Mojo are the main contributors.

Mojo is typed, compiled and a (claimed) superset of Python. It hasn't seen as much adoption, but it is led by systems Jesus, Chris Lattner. I'm hopeful it will get there eventually.

Astral on the other hand, has transformed the python dev workflow. 'uv' solved python packaging. 'ruff' solved linting and formatting and now 'ty' solves python type-checking. Separately, Pydantic allows data objects to be strictly typed and is pretty much a python built in.

And I know it's customary to throw a bunch of half-baked tools at someone to silence criticism about a language. For years, that was true for python. But no, these tools have genuinely become ubiquitous. The python code-base at my current job is pretty much strictly typed.

In a few years, I'm betting python will become a pleasant language to use.

Pydantic is regularly used, but what about Astral? Are you using Astral yourself? Is it in any major open-source projects?

I’ve never seen anyone do package management that wasn’t pip (or conda/apt depending on environment).

Open to it, I’ve just never seen it in the wild.

I have to interact with some Python code at work once in a while, and the mentioned Astral tools make me slightly happy to touch Python for the first time in my life. It got adopted very quickly in my company.

Yes! Probably the fastest I've seen a set of tools be adopted. It's the gold standard now.

In my new job and old job, we used both uv and ruff. The move to uv took a bit longer in the new job because it involved changes across 1000+ engineers. But it got done. Ruff integration in both cases was trivial.

uv was transformational. It is a great tool, yes. But a big part of it had to do with the dire state of python packaging it replaced. Another part of it had to do with the drop-in nature of it. The porting experience gave me a ton of joy.

ruff is great. It primarily solves annoyances. Some people still use flake8, black, isort and 20 other tools, but most greenfield projects are starting with ruff. And now that ruff is popular, you can share and steal complex linting/formatting logic in public to make it more powerful.

ty is new. Technically still in beta. We use based-pyright, which is also new. It's stable and works. But we only run based-pyright as a pre-commit hook. ty is 10x faster, so once it is stable, we will be able to run it aggressively on saves. We've tried ty internally and senior devx people are excited. But we're waiting for it to reach a 1.x major version before making the port. The majority of python repos either don't have type checking or use mypy, which is about 50x slower and annoying to use. So most teams should see a bigger improvement than what we'd experience.

If I had to guess, Astral wants to work their way up to a JIT compiler for python (like pypy). If the linter and type-checker can enforce strong code behaviors, then a JIT compiler should technically be buildable for python. But the future is anyone's guess.

Between you, @Pasha and @ChickenOverlord that's a pretty positive response. I guess I have some new tools to learn :)

Python is still shit, but less shit than it was. And still less shit than JavaScript, of course.

Uv has only been available for a year or two, but it is being adopted extremely quickly (because pip was just that bad): https://wagtail.org/blog/uv-overtakes-pip-in-ci/

Python also lets you commit crimes against humanity and good taste like this with ease: https://www.hillelwayne.com/post/python-abc/

C-nile greybeard

Worth reading the post just for this pun.

I don't know. I find this a topic that it's pretty easy to be nuanced about. Different languages attempt to provide different guarantees to the programmer during their operation. To provide those guarantees they have to be able to understand the code and prove the code satisfies those guarantees. Most such languages provide ways to disable checking those guarantees for particular code sections on the assumption that you, the programmer, have information the compiler lacks that things will work without the compiler having to check. If you, the programmer, tell the compiler you know better and then turn out to be wrong I think it's fine to blame the programmer.

I think everyone has, in their mind, a different idea about the extent to which buggy code should be caught by the compiler and these ideas are what inform what side of the blame the programmer/blame the compiler distinction you fall on. As an example: In college a friend and I had to write some networking libraries in C. At the time we didn't use any fancy editors or anything, just good old gedit and gcc. My friend was writing a function that was supposed to perform an arithmetic operation and return the output but every time he ran it he got a different (implausible) result, even with the same inputs. What was happening is that he had accidentally omitted the return statement for his function, so he was getting back some random garbage from memory on every run. Should the C compiler let you declare a function that returns a value and then let you omit the return statement? Is that mistake your fault or the language's fault? Formally doing this is undefined behavior but that does not always mean crash!
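For contrast, Rust makes that particular mistake unrepresentable; a quick toy sketch:

fn add(a: i32, b: i32) -> i32 {
    let sum = a + b;
    sum // delete this line and rustc rejects the function outright:
        // "mismatched types: expected `i32`, found `()`"
}

fn main() {
    assert_eq!(add(2, 3), 5);
}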

Well that's technically not true, they did. It's just that calling .unwrap(), a function which will immediately abort the application on error, counts as "handling" the error. In other words, the path of least resistance is not to actually handle the error, but to crash. I argue that this isn't a better outcome than what would have happened in C, which would also be to crash. Sure, the crash won't be a segfault in Rust, but that doesn't matter if half the Internet dies.

In this case I find the behavior of Option<T>.unwrap() unintuitive, but I am also coming from the perspective of exception-based error handling. As an analogy, C#'s Nullable<T>.Value will throw an exception if the nullable is actually null. That option obviously isn't available in a no-exception world. Maybe the default behavior should be more like the behavior of the try trait, such that it returns the error instead of panicking? Then let the programmer panic if the value is an error, although that introduces another layer of error checking!
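For what it's worth, the ? operator already gives roughly that behavior, forwarding the error to the caller instead of panicking; a minimal sketch:

use std::num::ParseIntError;

// `?` early-returns the Err to the caller, so the decision to panic
// (or not) is pushed up to whoever actually has context.
fn doubled(raw: &str) -> Result<i32, ParseIntError> {
    let n: i32 = raw.trim().parse()?;
    Ok(n * 2)
}

fn main() {
    assert_eq!(doubled("21"), Ok(42));
    assert!(doubled("oops").is_err());
}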

This month, a CVE was filed in the Rust part of the Linux kernel, and it turned out to be a memory corruption vulnerability, ironically enough. "But how could this happen?" Rust has these things called unsafe blocks that let you do unsafe memory operations, closer to what you would be allowed to do in C (though granted, I have heard convincing arguments that unsafe Rust is still generally safer than C). So the path of least resistance is not to do things the safest way, but to just surround everything in unsafe if you get tired of fighting the borrow checker.

I'm a little unsure of the criticism here of Rust as a language. Is it that unsafe exists? Presumably all the code that is not in an unsafe block has guarantees that equivalent C code would not. Is that not a benefit? Is the worst case here you wrap all your Rust code in unsafe and then you end up... as good as C?
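The usual idiom, for what it's worth, is to keep the unsafe part tiny behind a safe interface, which is what makes auditing tractable. A toy sketch:

// Safe wrapper: callers cannot misuse it, even though the body uses unsafe.
fn first_byte(bytes: &[u8]) -> Option<u8> {
    if bytes.is_empty() {
        return None; // establishes the invariant the unsafe block relies on
    }
    // SAFETY: the slice is non-empty, so reading index 0 is in bounds.
    Some(unsafe { *bytes.as_ptr() })
}

fn main() {
    assert_eq!(first_byte(b"hi"), Some(b'h'));
    assert_eq!(first_byte(b""), None);
}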

To be clear, I'm not saying that these incidents alone mean Rust is a bad choice for anything, ever. I'm not saying Cloudflare or Linux shouldn't use Rust. I'm not telling people what they should or shouldn't use. I'm just pointing out the double standards. Rust people can attack C all day using one set of (IMO, entirely justified) standards, but when they are confronted with these incidents, they suddenly switch to another set of standards. Or to put it more clearly, they have a motte and bailey. Motte: "Rust can't prevent shitty programmers from writing shitty code." Bailey: "C is unsafe, because of all the memory unsafe code people have written, and we should rewrite everything in Rust to fix all of it!"

I think there is a more productive discussion here about how language features and guarantees can help protect against writing buggy code and potentially making it easier to review code for bugs. I suppose I think of it by analogy to Typescript and Javascript. All Javascript is valid Typescript but Typescript needs to be compiled to Javascript. That compilation, in my experience, helps avoid whole classes of errors due to the lack of typing in Javascript. Sure you can write Javascript that just doesn't have those errors, and most people do, but Typescript renders them inexpressible. Similarly so for C and (non-unsafe) Rust.

Should the C compiler let you declare a function that returns a value and then let you omit the return statement? Is that mistake your fault or the language's fault? Formally doing this is undefined behavior but that does not always mean crash!

It's the language's fault (that probably should never have been allowed by the standard, and if it wasn't then the compiler could catch it by default) and it's your fault (you shouldn't have written that), and it's other language users' fault.

That third one might take a bit of explanation.

Any decent compiler these days will warn you about that error at compile time, and will stop the compilation if you use a flag like -Werror to turn warnings into compile-time errors. So just always use -Werror, right? We could all be writing a safer version of C without even having to change the C standard! Well, "look for functions that declared a return value but didn't return one" is an especially easy error for a compiler to catch, but there are others that are trickier but more subtle. Maybe you add -Wall to get another batch of warnings, and -Wextra with another batch, and you throw in -Wshadow and -Wunused-value and -Wcast-qual and -Wlogical-op and ... well, that's a great way to write your code, right up until you have to #include someone else's code. At some point your OCD attention to detail will exceed that of the third-party authors who wrote one of your libraries, and you can't always fault them for it (these warnings are often for code that looks wrong, whether or not it is wrong - even omitting a return statement could probably save one CPU cycle in cases where you knew the return value wasn't going to be used!). So, I have special headers now: one to throw a bunch of compiler pragmas before #include of certain third-party headers, to turn off my more paranoid warning settings before they can hit false positives, then another to turn all the warnings back on again for my own code, like a primitive version of "unsafe".

I was once paid to port C code from a system that allowed code to dereference null pointers (by just making the MMU allow that memory page and filling it with zeroes). And so the C code written for that system used that behavior, depending on foo = *bar; to set foo to 0 in cases where they should have written foo = bar ? *bar : 0; instead. As soon as you give people too much leeway, someone will use it, and from that point onward you're a bit stuck, unable to take back that leeway without breaking things for those users. I like the "nasal demons" joke about what a compiler is allowed to do when you write Undefined Behavior, but really the worst thing a compiler is allowed to do with UB is to do exactly what you expected it to, because then you think you're fine right up until the point where suddenly you're not.

This is getting off topic, but I thoroughly enjoy reading Raymond Chen's blog Old New Thing for the many stories of Windows bugs or implementation details or programmer misuses that later became compatibility constraints. When you upgrade your operating system and your Favorite Program stops working people rarely blame their Favorite Program even if it is the thing that was doing something unsupported!

https://xkcd.com/1172/

Or as a modern sage once explained:

On Sun, Dec 23, 2012 at 6:08 AM, Mauro Carvalho Chehab mchehab@redhat.com wrote:

Are you saying that pulseaudio is entering on some weird loop if the returned value is not -EINVAL? That seems a bug at pulseaudio.

Mauro, SHUT THE FUCK UP!

It's a bug alright - in the kernel. How long have you been a maintainer? And you still haven't learnt the first rule of kernel maintenance?

If a change results in user programs breaking, it's a bug in the kernel. We never EVER blame the user programs. How hard can this be to understand?

To make matters worse, commit f0ed2ce840b3 is clearly total and utter CRAP even if it didn't break applications. ENOENT is not a valid error return from an ioctl. Never has been, never will be. ENOENT means "No such file and directory", and is for path operations. ioctl's are done on files that have already been opened, there's no way in hell that ENOENT would ever be valid.

So, on a first glance, this doesn't sound like a regression, but, instead, it looks tha pulseaudio/tumbleweed has some serious bugs and/or regressions.

Shut up, Mauro. And I don't ever want to hear that kind of obvious garbage and idiocy from a kernel maintainer again. Seriously.

I'd wait for Rafael's patch to go through you, but I have another error report in my mailbox of all KDE media applications being broken by v3.8-rc1, and I bet it's the same kernel bug. And you've shown yourself to not be competent in this issue, so I'll apply it directly and immediately myself.

WE DO NOT BREAK USERSPACE!

Seriously. How hard is this rule to understand? We particularly don't break user space with TOTAL CRAP. I'm angry, because your whole email was so horribly wrong, and the patch that broke things was so obviously crap. The whole patch is incredibly broken shit. It adds an insane error code (ENOENT), and then because it's so insane, it adds a few places to fix it up ("ret == -ENOENT ? -EINVAL : ret").

The fact that you then try to make excuses for breaking user space, and blaming some external program that used to work, is just shameful. It's not how we work.

Fix your f*cking "compliance tool", because it is obviously broken. And fix your approach to kernel programming.

It's a little interesting to contrast this with my perception of Chen's attitude. He clearly was dedicated to making sure software that used to work would continue to work for users. It is basically never the software user's fault that the program they bought did things wrong. On the other hand, he has palpable contempt for the developers of user-mode software that took a dependency on some undefined or non-contractual behavior and created these compatibility constraints. Ex: Application compatibility layers are there for the customer, not for the program

Some time ago, a customer asked this curious question (paraphrased, as always):

Hi, we have a program that was originally designed for Windows XP and Windows Server 2003, but we found that it runs into difficulties on Windows Vista. We’ve found that if we set the program into Windows XP compatibility mode, then the program runs fine on Windows Vista. What changes do we need to make to our installer so that when the user runs it on Windows Vista, it automatically runs in Windows XP compatibility mode?

Don’t touch that knob; the knob is there for the customer, not for the program. And it’s there to clean up after your mistakes, not to let you hide behind them.

It’s like saying, “I normally toss my garbage on the sidewalk in front of the pet store, and every morning, when they open up, somebody sweeps up the garbage and tosses it into the trash. But the pet store isn’t open on Sundays, so on Sundays, the garbage just sits there. How can I get the pet store to open on Sundays, too?”

The correct thing to do is to figure out what your program is doing wrong and fix it. You can use the Application Compatibility Toolkit to see all of the fixes that go into the Windows XP compatibility layer, then apply them one at a time until you find the one that gets your program running again. For example, if you find that your program runs fine once you apply the VersionLie shim, then go and fix your program’s operating system version checks.

But don’t keep throwing garbage on the street.

I don't think they're really different attitudes. The things that got developers in trouble on the Windows side was broken code (in the sense of something like use-after-free) or use of undocumented code/code that wasn't part of the API contract. So when stuff that was outside of the API contract changed behavior, programs that were violating the API contract broke and that was the sort of stuff the compatibility code on the Windows side had to deal with. On the Linux kernel side, Linus considers everything exposed to userspace to be part of the contract, and anything that changes behavior in a way that breaks userspace is a violation of the contract from the kernel side.

I wonder what fraction of The Motte is software people.

I'd guess 30 to 50 percent

I was once paid to port C code from a system that allowed code to dereference null pointers (by just making the MMU allow that memory page and filling it with zeroes). And so the C code written for that system used that behavior, depending on foo = *bar;

AIX did this. I think the first three values were 0, 0xdeadbeef, 0xbadfca11. C programmers weren't supposed to depend on it -- the compiler would use it to avoid short circuiting expressions like:
myptr == NULL || (*myptr == whatever)

which would save branch overhead. And the very common
myptr == NULL || *myptr == 0

could skip the null test entirely.

But I'm sure some programmers did depend on it.

I argue that this isn't a better outcome than what would have happened in C, which would also be to crash.

This is both normatively and positively wrong.

Positively: in C, Undefined Behavior often leads to a crash, but is not actually required by the C standard to lead to a crash. The outcome is literally undefined.

Normatively: If you write code that leads to Undefined Behavior, the C compiler is allowed to and often will emit code that will crash; this is the same outcome as the Rust case, but is still a worse situation because grep unwrap is a thing and grep some_regex_catching_all_C_UB is (despite linter developers trying their best) only a dream. The C compiler is allowed to emit code that will make demons fly out of your nose. The C compiler is allowed to, and often will, emit code that will hand control of your computer to the botnet of whichever attacker first discovered how to trigger the UB, at which point if you're lucky your computer is now laundering your electric bill into some mafioso's bitcoin wallet at pennies on the dollar, and if you're unlucky your computer is now an accessory to DDOS attacks or blackmail or financial scams. These are much worse outcomes. Even CloudFlare crashing is much better than CloudFlare being compromised would have been.
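Incidentally, the unwrap grep can even be promoted to a compiler-enforced gate; a small sketch using Clippy's (allow-by-default) unwrap_used lint:

// `cargo clippy` now fails the build on any unwrap() in this crate;
// plain `cargo build` ignores the tool lint.
#![deny(clippy::unwrap_used)]

fn main() {
    let parsed: Result<i32, _> = "7".parse();
    // parsed.unwrap(); // uncommented, `cargo clippy` rejects this line
    if let Ok(n) = parsed {
        println!("{n}");
    }
}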

Bailey: "C is unsafe, because of all the memory unsafe code people have written, and we should rewrite everything in Rust to fix all of it!"

The second clause here is false IMHO (though bias makes MO very H: I've been writing a little C and a lot of C++ for 3 decades and have no current plans to stop), but the first clause is simply theoretically and empirically true and belongs in the motte.

I do wish the second clause was true, for some language if not necessarily Rust, because I have about a hundred other gripes with C/C++ that can probably only be fixed by someone starting from scratch ... but whenever I investigate a new language that I'm excited to see fixes flaw X, they seem to do it at the same time as they omit all possible support for features Y and Z and end up with something worse (for some of my purposes; there are three other languages I write in for different use cases) overall.

Yeah, my ideal modern language would be a curated version of C++. It'd have a modern package manager, cut out a ton of the language features that are outdated or dangerous or both, and rewrite some existing features (e.g. lambda functions) to be less clunky now that backwards compatibility isn't a problem.

But making something like this wouldn't be very sexy.

I'm a SWE that's never worked with Rust (I've mostly been in R/Python, then SQL/Java/C#). I feel like with the advent of LLMs, the choice of programming languages will be so much less important in the near future. Before LLMs, having to learn a new language imposed a lot of costs in how to do the basic stuff, as having 10+ years of experience in a language means you can bust out features much more quickly than someone who has to constantly go to StackOverflow to figure out how to do boilerplate stuff. I feel like a lot of the debates over languages were really just "please don't make me learn this new crap", with people having their preferred language and then actively searching for reasons to defend it. Now you can just have Claude Code easily do boilerplate in any language for you, and focus on testing things instead. I'm converting old SQR code into SQL now, and pre-LLM this would have required me to have at least a basic knowledge of SQR, but that's no longer really the case.

At least today, LLMs can't produce anything which runs in any of the languages I use at work or leisure. An AI should be able to reason from a spec etc. but they're currently slaves to training data alone.

You must be working with very strange/niche languages then. I've had no trouble getting them to understand SQR and a couple other extremely old languages like FAME.

If you've got text-format spec to give them, you can kinda get them to handle esolangs to solve problems that don't exist in the normal corpus for those languages. But I haven't seen one great at it yet.

I'm converting old SQR code into SQL now, and pre-LLM this would have required me to have at least a basic knowledge of SQR, but that's no longer really the case.

Using it in that direction is fine because you can check the output. I'm not sure it's going to work so well in a "I'm used to language X and they're making me write in language Y" scenario.

I would have agreed with you last year, but it's getting easier and easier to ignore learning the language you're working with too. It's obviously still useful to have at least a basic understanding, but I feel we're like <10 years from just trusting LLM code output as much as we trust compiler output. Nobody reads compiler stuff any more.

Nobody reads compiler stuff any more.

Clearly you've never worked in a field that cares about performance. People absolutely do read compiler output to see if it did anything too stupid and work around such issues.

We even have fancy new tools for this like Compiler Explorer, which is great for answering "does clang vectorize this like I want it to?".

In C, you may get away with not checking the return value of a function that could error. In Rust, that is completely unacceptable and will make the compiler cry. The path of least resistance in C is to do nothing, while the path of least resistance in Rust is to handle the error.

This reminds me of learning BASIC on a Timex/Sinclair 1000, AKA the US release of the ZX81. It had this stupid gel-tab keyboard that made real typing impossible, but it would turn single keystrokes into full commands, even multiple commands per key which it would select contextually depending on where you were in the line you were typing. That and it called out syntax errors on the spot and wouldn't accept them until they were fixed.

This is all relevant to nothing, I'm just waxing nostalgic. For a machine with 2k native RAM (expandable to 16) it was an awesome kid's first computer to learn how to program out of a book.

Central planning fails again.

Every single time. This happens every time where you try to engineer around the existence of the human soul, and it will continue to happen, forever. There is a war going on between the ensouled and the enslaved, and you can see it playing out here. The enslaved, who occupy places like HR departments, CPS field agencies, reddit moderation discords, city ordinance compliance departments, HOA boards, and Rust governance bodies, fight against the idea that an ensouled human being might have their own ideas about how to live their life, or how to manage the memory on their own computer.

C is god's language, and as counterintuitive as it may sound: so is python. All other languages exist only to build a path towards enslavement.

I guess we know whose language is PowersHell. (I actually really like powershell, but I might be lawful evil)

This happens every time where you try to engineer around the existence of the human soul, and it will continue to happen, forever

Yes, some of the ensouled will find a way around whatever barriers you put in place, but at some point you still need to at least try to bend incentive structures to reduce, if not outright eliminate, murders and the like.

so is python

Can you explain why for the non-CS-minded (me)?

Python is very golden retriever coded.

Python doesn't ever error because it thinks you've made a mistake; it only stops you if it can't figure out what you are asking it to do. It does force you to use garbage collection, and the language features love hashmaps, but it is generally very unopinionated about anything else.

PHP is even better about not stopping if it thinks you made a mistake - try to open a file that doesn't exist? Yeah, sure, fine, just return false. Loop through false? Of course, obviously that's intended behaviour. Mix and match numeric and stringy keys in an array? No problem, it's a hashmap, and it'll even sort for you.

If Python and C are the languages god wants us to use, PHP is the language he uses himself.

(In case you couldn't tell, /s).

If C and python are God's languages, then God asks us to live in caves in the desert while the sinful inherit the Earth. Yes it's holy and virtuous, but you can't build the tower of babel with it.

God knows that it is our fate to spend our lives in the darkness, and knows that we'll need a light.

C is the language which feels like it can get us out, but python is the language that will be there for us when we can't.

Hey, C combines the power of assembly with the elegance of assembly, as the joke goes.

Python has completely different problems. On the one hand, the duck typing means that erroneous assumptions about types may go undetected for a long time before blowing up in a completely innocent part of the code. (As far as a weakly typed piece of code can be innocent, that is.)

More critically, it is slow. Reading a field of an object, or calling a function defined in some global scope, both require a lookup in a hash map, where in C the former would be pure pointer arithmetic and the latter would be resolved by the linker (or earlier) and turned into a constant address.

You can build anything with C if you're not a coward.

You can build anything out of toothpicks and tissue paper. Doesn't mean it's a good idea.

Every language in which you can tell the compiler/runtime "just trust me bro, I know what I am doing" will devolve into a just-trust-me-bro language. This is why you shouldn't have trust-me-bro sections. So it may be me - but I never really understood the point of Rust. As in, why it exists. To me it seems that it combines the clumsiness of golang with the unsafety of C.

Btw - both C and C++ are quite memory safe if you don't try to be clever.

Every language in which you can tell the compiler/runtime "just trust me bro, I know what I am doing" will devolve into a just-trust-me-bro language. This is why you shouldn't have trust-me-bro sections.

There is a name for a language which does not have 'just trust me bro' sections. It is Java.

If you want to do anything interesting with hardware or squeeze out optimal performance, you will sometimes end up in situations where you are making assumptions which can not be verified by the compiler, which generally is ill-equipped to verify arbitrary mathematical proofs or parse hardware specifications.

Ideally, a language would allow you to specify hardware behavior and include a theorem verifier which you can use to prove that because two variables are co-prime per your precondition, your divisor can indeed not be zero in the next line. Instead, you have unsafe blocks.

Of course, some lazy programmers will decide that unsafe blocks are the path of least resistance. Probably when C came out, some asm programmers decided that they could code "C" by just using inline assembly for everything. If you want to protect a programmer from harming themselves, you need to place them in a safe padded cell like Java does.

The use case of Rust is when you have someone who is actually willing to work with the borrow-checker and only use unsafe in the places where that is not possible. This will make it much easier to audit the code. Imagine having to verify the stories of two suspects. Suspect "Rust" provides you ironclad, notarized evidence for 90% of his claims, while 10% (the unsafe stuff) is unsupported by evidence. Suspect "C" provides you no evidence for any claims. To make sure that their story checks out on a similar level of confidence, you would likely spend 10x as much work on subject "C" (or possibly more because the unsafe code blocks can interact.)

Btw - both C and C++ are quite memory safe if you don't try to be clever.

For C, that is a ridiculous claim. You might as well say that the Taliban regime is great for women's rights as long as the woman is willing to submit to her husband and not voice controversial opinions.

Sure, there are plenty of programs in C which are obviously sound. But not every problem is easily transformable into such a program. "Don't be clever about memory management" is not actionable advice if you need to share data with indeterminate lifetime between multiple threads any more than "try to be straight and submissive" is actionable advice for an Afghan butch lesbian.

Array accesses in C are memory unsafe as fuck. Unlike for C++ (_GLIBCXX_ASSERTIONS), the best way to do safe array indexing in C boils down to "wait for clang to implement -fbounds-safety".
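For contrast, a toy sketch of the same class of bug in safe Rust, where the failure mode is a defined, immediate panic rather than silent corruption:

fn main() {
    let v = [1, 2, 3];
    let i = 10;
    // Out-of-bounds indexing is checked: this panics with
    // "index out of bounds: the len is 3 but the index is 10"
    // instead of quietly reading whatever lives past the array.
    let _x = v[i];
}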

Any language with reflection, dynamic types at runtime, and a few other travesties Java supports is trust-me-bro. Same with dependency injection.

For C - I haven't written it in a long time, but just moving to the bounds-checked functions solves a lot of errors. Now for multithreading and passing messages and data - the answer is to use erlang.

There is a name for a language which does not have 'just trust me bro' sections. It is Java.

If I had a nickel for every java.lang.NullPointerException I've seen in the wild, I'd probably have at least a few more bucks, and I don't use that many Java applications.

I would argue that while null pointer dereferencing (at least in userspace) is bad, it is at least of bounded badness, because it invokes well-defined behavior, just like an integer division by zero. You could say that any language with runtime errors (or even any language where you do not have to prove the correctness of your program) qualifies as 'trust me bro', but that is very distinct from a memory-unsafe language.

Central examples of memory troubles, such as use-after-free or out-of-bounds accesses are much more evil because they do not invoke well-defined failure modes. Often, they lead to arbitrary code execution.

Just to be pedantically clear (I saw your comment on the chronological page and didn't realize it was correct in context until I looked at the parent), a null pointer dereference invokes well-defined behavior of bounded badness in Java. In C, a null pointer dereference is Undefined Behavior and so is still allowed to lead to arbitrary code execution, both in theory and in practice.

You are correct.

I would still say that in C, under a few circumstances you can depend on a null pointer access crashing, e.g. if all of the following apply:
(1) You are in standard userland where nothing is normally mapped to 0.
(2) You know that your code will not be run in other settings (for example, you are not writing a library).
(3) You are not handling untrusted data.
(4) You are not in a privilege-elevated mode (like in the kernel).

Then you can usually count on getting a segmentation violation for null pointer access, so that failing to do

if (!ptr)
    abort();

will be of limited badness. (Given all these caveats, it is probably less bad than most other flavors of undefined behavior.)

Compare and contrast with another source of undefined behavior: out-of-bounds array accesses. These will typically not cause segfaults, but will instead silently corrupt program data and flow, often leading to arbitrary code execution if exploited. It is the difference between getting killed from smoking and getting killed from smoking while filling up your gas tank.

Now, you could make the point that anyone treating a segfault as a safety net (instead of as something that ruins one's day about as much as one's car airbag firing would) should not be programming with raw pointers, and I might even agree. In my defense, my baseline is physicists who self-taught C while programming with ROOT, most of whom have no business coding in any language less safe than Python, who still happily use C arrays, and for whom a segfault on every third run is just normal.

I would still say that in C, under a few circumstances you can depend on a null pointer access crashing, e.g. if all of the following apply:

(5) Your compiler hasn't (ab)used the fact that null pointer accesses are undefined behavior to "simplify" code in unexpected ways.

E.g., consider

#include <limits.h>

void function_with_side_effects(void);

int foo( int *p )
{
  if ( p ) function_with_side_effects();  /* call guarded by a null check */
  return *p < INT_MIN;                    /* dereferences p unconditionally */
}

Since p is unconditionally dereferenced in the return statement, which results in undefined behavior if p is null, the compiler is free to assume that p is never null and to generate code that unconditionally calls function_with_side_effects() rather than inserting a branch, since if ( p ) can only ever be false in the presence of undefined behavior. That is, it can treat the code as if it were instead written as

int foo( int *p )
{
  function_with_side_effects();
  return *p < INT_MIN;
}

Depending on exactly what function_with_side_effects() does, that could result in a lot of unsafe outcomes even in "unprivileged user code that never handles untrusted data". Even more fun, since *p < INT_MIN evaluates to 0 for all possible values of *p, the compiler can completely remove the dereference that you'd expect to cause a segfault, thereby treating the code as if it were written as

int foo( int *p )
{
  function_with_side_effects();
  return 0;
}

Thanks, that is a good point.

(Also, I am pretty sure that compiler developers who use undefined behavior that way to 'optimize' code will go to hell eventually, but that will not help me if I am stuck debugging such a thing.)

C# added nullable reference type annotations a few years ago, and if you're strict about actually using them (my Indian coworkers are not, sadly) then you can reliably eliminate 99% of null reference exceptions. When the C# team finally gets around to implementing discriminated unions (in the next decade or three) and we finally have Option<T> there will be actual null safety in C#.

Java does have a trust-me-bro mode. See https://developer.android.com/reference/sun/misc/Unsafe

That's Android documentation, but regular Java has the same facility.

Granted, Unsafe is being deprecated, but we'll have equivalently powerful FFI stuff.

C++ now has smart pointers, and once you get the hang of them, you don't want to go back to the old way of managing memory manually. It's not about the language being intrinsically "safe" or "unsafe", but rather that it enables you to automate memory management so you don't have to think about it unless you absolutely need to. You can just have a small "just trust me bro" section instead of shouldering the cognitive load of double-checking the entire codebase.

I spent the last 2 years of my career fighting with Rust, and I detest the language. Unfortunately, I'm aware that I'm in the minority, and a lot of people buy into the hype that it's the wave of the future. In my opinion, it's a language that was a bad idea from the start.

  • Most programs are not the Linux kernel and do not need 100% safety guarantees. It's honestly fine if a game segfaults once in a while due to C++'s scaaaary "undefined behavior" (which is a crash 99.99% of the time).

  • And Rust kinda fails at the safety guarantees in practice, anyway. People who aren't experts bash their heads against the language, trying to do simple tasks that in any other language would be done in 15 minutes, and then they finally give up and do one of two things: wrap things in an unnecessary "unsafe" block, or copy a large object that didn't need copying. It turns mediocre programmers into bad programmers. (This isn't rhetoric; in our very real project, I saw how the "unsafe" blocks kept adding up, because we had to actually deliver a product instead of spending our time justifying ourselves to the borrow checker.)

  • The cost of C++ is all the crashes that we very visibly see everywhere. The cost of garbage-collected languages is sluggishness and, often, software literally just pausing for multiple seconds to free up memory. (When you're playing a game and you see a multi-second pause in the middle of gameplay, that's why.) The cost of Rust is in code that just doesn't get written, because it's 3-5x harder to get anything practical done. And unlike the first two, this is a subtle, invisible cost that is easily overlooked.

  • The designers of Rust really wanted to make a programming language with formally-verified correctness; there are other programming languages that go all the way on this, but they're all impossible to use. In fairness, Rust is merely difficult to use rather than impossible, but that's because the borrow checker is trying to do most of the proof for you. Unfortunately, the borrow checker isn't very good at it, and fails in many common actually-safe scenarios, and they didn't include any way for you to, well, help it along. (Ok, they do have lifetime specifiers, but those are insanely ugly constructs that provide no actual value to the programmer. They're only there for you to assist the borrow checker - often in ways it should be smart enough to do itself.) Now, Rust does keep improving the borrow checker, so fortunately this improves each year, but I doubt the problem will ever really go away.

  • The thing that really, really sticks in my craw is that the language is built around a horrible "mutable" keyword that is both ugly and a lie. C++ has "const", which is fantastic and they should have just copied it and been done with it. I actually think their choice not to do that is political, so they could pretend they were innovating over C++. But now, lots of objects have to have "internal mutability" to work, which means that there's no actual way to get, say, a reference to a mutex with the guarantee that you will not modify the object it points to. And you also can't have "const" member variables, so for instance, you can't initialize a temp directory as an object's member and add an in-language guarantee that it won't change until the object is destroyed.

One thing that Rust does really, really, amazingly really well is type inference. And it's nice in a lot of situations, but like the "auto" keyword in C++, I think people abuse it far too much. Types should be explicitly written out in code! They're a very important part of the logic! Libraries should be encouraged to be written so that types are easily readable (not 100-character-long nested monstrosities that both C++ and Rust are guilty of). And Rust uses its type inference to hide just how ugly its design actually is, so you don't have to stare at crap like MutexGuard<Option<Arc<Map<int,Result<Option<String>,MyError>>>>> all the time.

But oh well. I'm aware that some of this is just my biased old-man get-off-my-lawn outlook. (Also, I'm not a Rust master, so I'm sure that some of what I'm saying above is flat-out incorrect.) But I don't think I'm wrong that Rust has a lot of downsides and is not the right tool for many (or even most) purposes. Ironically, one saving grace is that it's getting popular at the same time as LLMs, which are actually pretty good at navigating its human-unfriendly intricacies. Without ChatGPT I would have been a lot less productive.

Professional Rust developer here.

First of all, congratulations on getting paid to write Rust code without actually liking it. In my experience this was quite difficult to achieve so everyone around me really likes Rust a lot.

The borrow checker is honestly one of the hot topics about Rust that always confuses me, because I almost never have problems with it apart from some very small edge cases. Meanwhile people online seem to base half their Rust experience on the borrow checker. May I ask what sort of programming you did?

I will fight you to death about the mut keyword. It is absolutely fantastic and it pushes you in a more functional way of programming.

which means that there's no actual way to get, say, a reference to a mutex with the guarantee that you will not modify the object it points to.

But this can be achieved easily with like a 10 line wrapper type around Mutexes in your codebase?
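
Roughly this, say (a sketch; ReadOnlyMutex and ReadGuard are names I just made up):

use std::sync::{Mutex, MutexGuard};

// A mutex whose lock only ever hands out shared (read-only) access.
pub struct ReadOnlyMutex<T>(Mutex<T>);

impl<T> ReadOnlyMutex<T> {
    pub fn new(value: T) -> Self {
        ReadOnlyMutex(Mutex::new(value))
    }

    // The guard keeps the lock held, but callers can only reach
    // &T through it, never &mut T.
    pub fn lock(&self) -> ReadGuard<'_, T> {
        ReadGuard(self.0.lock().unwrap())
    }
}

pub struct ReadGuard<'a, T>(MutexGuard<'a, T>);

impl<T> std::ops::Deref for ReadGuard<'_, T> {
    type Target = T;
    fn deref(&self) -> &T {
        &self.0
    }
}

fn main() {
    let m = ReadOnlyMutex::new(42);
    println!("{}", *m.lock()); // read access only, no way to mutate
}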

Anyway, I used to write C code and switched to Rust without ever brushing up against C++ much, so it has been a gigantic improvement in all ways. But honestly I fail to see why anyone would prefer C++ at this point for new projects (except for gamedev, where shipping something buggy and fun trumps correctness by a lot).

Ok, I guess we're getting into the weeds now. Anybody who's not a programmer, feel free to bow out. :)

The borrow checker is honestly one of the hot topics about Rust that always confuses me, because I almost never have problems with it apart from some very small edge cases. Meanwhile people online seem to base half their Rust experience on the borrow checker. May I ask what sort of programming you did?

Backend infra work. Efficiency was important, so we ended up having to use pointers to bypass the Rust borrow checker in cases where it just didn't work (e.g. for caching a map lookup). Concurrency was also important, and Rust's concurrency features were kind of tacked on later in its design. (Actually, Rust relies on third-party libraries to solve a LOT of its design flaws, like tokio for async stuff.) I do admit that the Send/Sync traits are nice for catching a few common concurrency bugs. But screw Rust for intentionally not implementing them for the pointer type because they want to discourage its use.
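
A toy version of the map-caching friction I mean (invented example, not our actual code):

use std::collections::HashMap;

fn main() {
    let mut map: HashMap<String, Vec<u32>> = HashMap::new();
    map.insert("hot_key".into(), vec![1, 2, 3]);

    // Cache the lookup so we don't hash "hot_key" on every access...
    let cached = map.get("hot_key").unwrap();

    // ...and now the borrow checker forbids all mutation while
    // `cached` lives, even mutations we "know" can't invalidate it:
    // map.insert("other".into(), vec![]); // ERROR: cannot borrow
    //                                     // `map` as mutable

    println!("{:?}", cached);
}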

I will fight you to death about the mut keyword. It is absolutely fantastic and it pushes you in a more functional way of programming.

I don't think you should compare "const" and "mut" if you're not a C++ developer and you've never used "const" before. "const" is "absolutely fantastic" and pushes you to a "more functional way of programming" - it lets you clearly spell out what side effects all your functions and APIs have. With Rust's version of "&mut" - which despite the lying name (and horrible syntax) actually just means "exclusive reference", something that is correlated with but not the same as "mutable" - a shared non-mut object might change, it might not, you'll just have to check the implementation for details.
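
A toy illustration (names invented):

use std::sync::Mutex;

// The signature advertises a shared reference, yet the callee can
// still mutate the protected value: "shared" and "immutable" come
// apart as soon as interior mutability is involved.
fn looks_read_only(m: &Mutex<i32>) {
    *m.lock().unwrap() += 1; // mutation through a plain & reference
}

fn main() {
    let m = Mutex::new(0);
    looks_read_only(&m);
    assert_eq!(*m.lock().unwrap(), 1);
}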

But this can be achieved easily with like a 10 line wrapper type around Mutexes in your codebase?

Here's a C++ object: MyStruct. Here's the read-only version of that C++ object: const MyStruct. You'll forgive me for not being impressed that this common safety pattern can be done in "like 10 lines" in Rust by implementing your own Mutex wrapper.

Anyway, I used to write C code and switched to Rust without ever brushing up against C++ much, so it has been a gigantic improvement in all ways. But honestly I fail to see why anyone would prefer C++ at this point for new projects (except for gamedev, where shipping something buggy and fun trumps correctness by a lot).

Yeah, don't get me wrong, C++ has plenty of issues, and I wish there was a better alternative that kept its strengths too. Rust, unfortunately, decided not to. If you're going directly from C to Rust, you might even think it's normal that you need third-party libraries to, say, put floats into a heap (heh).

EDIT: Actually, while I'm on the topic, and since you're a Rust pro, let me ask... does putting wrappers around everything feel natural to Rust developers? The type inference makes it so you don't have to see all the nested wrapping. Maybe that's just a paradigm I don't grok. Because I find Rust's heap to be spectacularly and uniquely badly designed. Rather than treating the comparator as a separate parameter from the thing being compared - like in every other language in existence - Rust instead has you create a wrapper object to override the object's comparisons. Which means you need to do type conversions for every get and put operation. And if you pull some objects out of a max-heap and you forget to convert them back, then do some comparisons on them yourself... the code (because its types are hidden) will look innocuous, but you'll get exactly the wrong result.
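
Concretely, the standard library's min-heap idiom is the canonical example of the wrapping I'm complaining about:

use std::cmp::Reverse;
use std::collections::BinaryHeap;

fn main() {
    // BinaryHeap is a max-heap; for a min-heap you wrap every
    // element in Reverse on the way in...
    let mut heap = BinaryHeap::new();
    for x in [3, 1, 2] {
        heap.push(Reverse(x));
    }
    // ...and unwrap it again on the way out. Forget the unwrap
    // somewhere and your later comparisons are silently backwards.
    while let Some(Reverse(x)) = heap.pop() {
        println!("{x}"); // prints 1, 2, 3
    }
}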

Backend infra work. Efficiency was important, so we ended up having to use pointers

This is wild to me. I work in HFT and almost never have to touch pointers, even in cases where latency/performance matters to absurd levels. I know you said you don't work anymore, but passing pointers around threads like what you are describing is not even good C++ practice: https://floooh.github.io/2018/06/17/handles-vs-pointers.html. Pointers are too big on 64-bit systems, they will mess up your cache locality, and you will often get horrible assembly because of aliasing (more details on that below).

Rust relies on third-party libraries to solve a LOT of its design flaws, like tokio for async stuff

What design flaw is tokio solving? It is a deliberate part of Rust's design that the language doesn't ship with a default runtime, so there is experimentation around this. And tokio is a really fantastic piece of software. Maybe you are thinking of libraries like async-trait? It can be annoying to find gaps like this in the language that take years to close, but their number has been going down rapidly, and it is quite nice that different solutions can be tested as macro libraries before being baked into the language design. Definitely beats a committee hashing out nonsense which then cannot be changed forever because muh backwards compatibility.

With Rust's version of "&mut" - which despite the lying name (and horrible syntax) actually just means "exclusive reference"

Yes, and this is fantastic design because of aliasing. If you have ever stared at absolutely horrible assembly because gcc couldn't prove two pieces of memory are not overlapping, you understand why this language design exists. It is not for political reasons or giving an illusion of progress over C++. The Godbolt guy's recent compiler optimizations video is a decent primer: https://youtube.com/watch?v=PPJtJzT2U04?si=kZ3CFZKlzDCeSRxX. mut is much superior to const for this reason alone. Crazy that C has the restrict keyword to address exactly this problem, but standard C++ doesn't even have that. (Also, it is the correct choice to make the default non-mut; it makes reasoning about code much easier.) I think you have a point about interior mutability confusing this relationship though. You win some, you lose some.
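
A tiny example of what exclusivity buys the optimizer (hypothetical function, worth staring at in a disassembler):

// Because `dst` is an exclusive &mut, the compiler may assume it
// cannot alias `src`, so *src can stay in a register across the
// store. The equivalent C needs `restrict` to promise the same.
pub fn store_and_sum(dst: &mut i32, src: &i32) -> i32 {
    *dst = *src;
    *dst + *src // no reload of *src: it provably hasn't changed
}

fn main() {
    let (mut d, s) = (0, 21);
    println!("{}", store_and_sum(&mut d, &s)); // 42
}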

Here's a C++ object: MyStruct. Here's the read-only version of that C++ object: const MyStruct. You'll forgive me for not being impressed that this common safety pattern can be done in "like 10 lines" in Rust by implementing your own Mutex wrapper.

I am not very knowledgeable about this subject, but if const MyStruct contains a pointer in it, you can simply mutate the pointed-to data, right? ChatGPT seems to think so. Then this is the exact same problem you are complaining about with Rust RefCell/Mutexes. Your const MyStruct doesn't guarantee anything at all that isn't already guaranteed by a Rust shared reference or Rc/Arc without interior mutability.

does putting wrappers around everything feel natural to Rust developers

Yes the newtype pattern is extremely standard in Rust and I do it all the time.

Which means you need to do type conversions for every get and put operation

Usually when you newtype something, you are doing so because you don't want the inside type to be accessible as it is anymore, but only through a certain limited interface. In most use cases you should be writing the newtype in a way that the user can do everything they need through the newtype. It is pretty trivial to define arithmetic operations on your newtype, for example.
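
Something like this, say (Metres is invented for illustration):

use std::ops::Add;

// Hypothetical newtype: a distinct type wrapping f64, usable only
// through the interface we choose to expose.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Metres(f64);

impl Add for Metres {
    type Output = Metres;
    fn add(self, rhs: Metres) -> Metres {
        Metres(self.0 + rhs.0)
    }
}

fn main() {
    let total = Metres(1.5) + Metres(2.5);
    println!("{total:?}"); // Metres(4.0)
    // Metres(1.0) + 2.0 would not compile: no accidental mixing.
}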

So, you have some good responses and I want to repeat that this is all just my opinion and personal experience. Thanks for the reply.

I won't address your criticism of our design because you don't know the details (and I'm not going into them). But we're not stupid people. Not collectively, at least... :) This attitude of "I've thought for three whole minutes and I can't see why anybody would ever want this, so the people telling me they want this are wrong" is kind of emblematic of Rust's design vs C++'s.

Yes, and this is fantastic design because of aliasing.

I agree it's nice that Rust solves the aliasing problem. They didn't have to solve it by confusing mutability and exclusivity, though! It's like if a plumber comes and fixes my toilet, but breaks some windows on the way out. The correct response to "hey, why did you break the windows" isn't "look, the toilet's fixed!"

I am not very knowledgeable about this subject, but if const MyStruct contains a pointer in it, you can simply mutate the pointed-to data, right?

You can if you can access it, yeah, but it'd be weird for the C++ object to give a user access to raw internal pointers. Note that in most cases you're also limited on what methods (including getters/setters) you can call on a "const MyStruct" - whether a function is read-only or not can be explicitly declared in its signature. Now, you don't have to use const at all in a library. Or you could use it but then have the same "internal mutability" as Rust - but why would you do that? The point of the keyword's existence is to allow programmers to positively assert "this is what a read-only version of this object looks like". Which is powerful and important and should be easier than it is in Rust.

If your complaint is that "const" doesn't force the C++ programmer to be safe, you're right about that. C++ gives options, not restrictions, and for bad programmers it is indeed full of footguns. My praise of "const" is due to what it can be in the hands of good programmers - and I would have loved it if Rust introduced the keyword but applied its stringent safety guarantees to it.

Usually when you newtype something, you are doing so because you don't want the inside type to be accessible as it is anymore, but only through a certain limited interface.

But the heap example I give is the exact opposite of this. I want access to the inside type (the number) - I never want to think about the heap's comparator except at initialization - but instead I'm being forced to wrap it everywhere. Now, it's not entirely fair to compare Rust's library with C++'s STL. Like you said, "it can be annoying to find gaps like this in the language that take years to close" - Rust is young, and the STL is the result of decades of finetuning/nitpicking work from thousands of people. I agree that, in the future, something like Rust's bad heap implementation might be a non-problem.

Types should be explicitly written out in code! They're a very important part of the logic!

Sometimes types shouldn't be explicitly written out in code because they're a very important part of the logic. If I write generic (templated) code that returns the heat capacity of a gas mixture at a given temperature, sometimes I just want that temperature to be a double so I can get a quick answer for a point's heat capacity, and other times I want it to be a Vector<DualNumber<double, SparseVector<int, double>>> so I can get SIMD or GPU code that gives me a hundred points' heat capacities as well as their derivatives with respect to the input data that was used to calculate temperature. There's basically no way I'm writing intermediate data types for such a calculation as anything but auto.

When designing even simpler library methods, I'm sadly also kind of a fan of users writing auto out of laziness. If I ever accidentally expose too much of my internal data structures, use too small a data type, etc. and have to change the API later, often I can change it in such a way that lazy auto users are still fully compatible with the upgraded version, but users who explicitly wrote foo::iterator can't compile after my switch to bar, and users who explicitly wrote int are now truncating my beautiful new size_t and are going to be unhappy years later when they run a problem big enough to overflow 2^31.

You make some good points, and I don't hate all uses of auto (I'll usually use it for iterators, too). Though making it so your code silently compiles when a library API changes isn't something I really approve of - it's not hard to see how that can go bad. Note that one of the things Rust does way, way better than C++ is package management - you don't have to update a library until you want to, and you can handle any compile errors at that time. As for your last scenario, another advantage Rust has over C++ is that it won't silently allow it to compile, since Rust has almost no implicit type conversions. I approve: Numbers changing types lossily is something that should be done explicitly.

And it's nice in a lot of situations, but like the "auto" keyword in C++, I think people abuse it far too much. Types should be explicitly written out in code!

Good lord, this, and it's become endemic in C# and Java, too, where it makes some of the absolute least sense.

I agree completely, I get incredibly frustrated every time I see this.

Most of the time I see "var" in C#, it's because the dev didn't want to have to track down the exact details of the type they're making a variable of: they understand the general shape of it, but the wrappers and type details are annoying.

But the details of that type are important, and if they can't trivially figure it out when they're first writing the code, the other dev who comes in 2 years later to fix a bug with the code is going to have a way harder time.

In theory it's only supposed to be used for trivially-inferrable types like an int, but I very rarely see that because... if it's obvious it's an int, it takes exactly as much time to type "int" as to type "var".

Is your job all Rust, or does it have other redeeming features you stick around for?

If the former, what are your plans?

If none of the above, are you a masochist?

Well, there's the paycheck of course! And I did like my coworkers, even if they were a lot more gung-ho about Rust than I was. But I didn't actually stick around - as of October, I've moved back to Canada (from Silicon Valley) and am giving retirement a try...

So, I woodwork, and to keep the fear of merciless spinning metal teeth in me, I frequently watch woodworking safety videos. It's humbling listening to these woodworkers with 30 years experience talk about the time they accidentally fed their thumb through the tablesaw, or the heel of their hand through the jointer, or the tip of their pinky through the router.

99% of the stories start the same way. "I just had to make one more cut." They were in a rush, they were tired, they'd ticked off nearly every item on their todo list for the long day, and suddenly they realize they need to rip-cut one more piece. There is a safe way to do it, but they are so tired, and so exhausted, and they've done this so many times, that they do it the unsafe way, figuring it won't be a problem.

It turns out to be a big fucking problem.

Now woodworking is a physical as well as a mental task. There will be physical signs to your exhaustion and wavering judgement. If it's a hobby, you can just decide it's time to hang it up that day. If it's not, I guess that's why people lose thumbs.

A lot of these bugs in Rust code that keep going viral are stupid. But they also stink of a programmer utterly and thoroughly tapped out from fighting the borrow checker. So mentally exhausted from the endless walls put up between them and the simple task they have to do, walls they've been working through one by one, that at the end of 8 hours their judgement is so impaired they decide "Fuck it, it's just one line of code, it doesn't need to be safe."

I do wonder if we'll start to see more and more problems with Rust code. Not problems caused by negligence, but problems caused by sheer exhaustion. Negligence can be fixed. Exhaustion I'm less certain about.

There is a safe way to do it, but... they've done this so many times, that they do it the unsafe way, figuring it won't be a problem.

This reminds me of the last time I worked in a rural setting as a junior. A nurse nonchalantly alerted me to a patient who she described as having had a laceration with some circular saw, with a demeanour suggesting that she thought the patient was a bit of a sissy and should've been told off and sent home.

I took a look at the wound and immediately blanched -- the second and third digits of this man's dominant hand were so obviously degloved that I can't imagine how the nurse could've seen the wound and called it a laceration; the subcutaneous tissue on both fingers was really trying its best to invent a new form of codex.

The only explanation I could think of how she could've given me that handover was that she didn't look at the wound at all and decided to short-cut the decision-making process after hearing a one-sentence summary of the history from the patient's wife.

I don't really have anything to say about the programming side of things, just wanted to share a story that also incidentally includes a woodworking incident and someone trying to take the easy way out.

A lot of these bugs in Rust code that keep going viral are stupid. But they also stink of a programmer utterly and thoroughly tapped out from fighting the borrow checker. So mentally exhausted from the endless walls put up between them and the simple task they have to do, walls they've been working through one by one, that at the end of 8 hours their judgement is so impaired they decide "Fuck it, it's just one line of code, it doesn't need to be safe."

This isn't the way safety people think. They think the problem is not that people are tired of fighting their burdensome safety measures and so bypass them; they think the problem is that it's possible to bypass their safety measures, and so see this as reason to put in more controls.

That's not my objection to Rust -- Rust was created by and is controlled by my Culture War enemies, who inject their beliefs into their actions a lot more than even Richard Stallman ever did. But it is an objection to Rust.

Sounds almost like you're advocating SafeStart: Coding Edition.

I’m not sure this is culture war, beyond the degree to which Emacs vs. vim is culture war. That said, Rust has never really appealed to me. It strikes me more as a B&D C++ alternative than a C alternative, and I was somewhat surprised when Linus decided to allow it. I think that there is room for a systems language with the spirit of C but fewer undefined behaviors and better ergonomics around things like array bounds and bit bashing, but I don’t think any of C’s would-be successors has quite found the niche yet.

More on topic, I think that @FistfullOfCrows’ observation about Rust’s leadership is apropos. I usually take a code of conduct in open source projects as a statement that this is a self-consciously progressive space and that even relatively tactful (for programmers) dissent is unacceptable. Consequently I assume that I am not wanted there; I may still use the software depending on what it does, but I am not likely to provide bug reports, patches, or donations.

How seriously I take that depends on context. In Python I think that Guido, while a progressive sort of guy, is a restraining force; but since he has resigned the role of benevolent dictator, things have gotten messier. The FreeBSD CoC generated enough backlash and reassurances that I still take its implications with a grain of salt. SQLite’s code of ethics, by contrast, countersignals the code of conduct trend quite strongly, and it even managed to get lots of positive comments on HackerNews doing so.

For Rust, though? It’s not lost on me that Rust used to be a Mozilla project, and everything I see suggests that the culture that pushed out Brendan Eich lives on there. They’re not hiding their power level.

It’s not lost on me that Rust used to be a Mozilla project, and everything I see suggests that the culture that pushed out Brendan Eich lives on there.

Yeah unfortunately this is very much the case. I like the Rust programming language, but the Rust community is incredibly toxic. One of the worst communities online imo.

It strikes me more as a B&D C++ alternative than a C alternative,

If you are really into this sort of thing, you should consider Ada/SPARK: Rust is cavalier enough to let the programmer engage in potential integer overflows (in default production mode) and doesn't support specifying custom valid ranges (type my_integer is range -3 to 11).
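
A minimal demo of the Rust default in question (the function is made up; stock cargo debug/release profiles assumed):

fn bump(x: u8) -> u8 {
    // Default debug builds panic here on overflow; default release
    // builds silently wrap to 0 (overflow-checks are off).
    x + 1
}

fn main() {
    println!("{}", bump(255));
}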

In actuality, I like what they're aiming for, but I expect most of the benefit I'll personally see will come from upping the safety game of C++ (and C, to a lesser extent) via language extensions, automated tooling, and general best practices. I reflexively write tests, use C++11 pointer types, check pointers for nullptr before dereferencing, and use the .at() bounds-checked methods for container elements unless performance is impacted. That said, I still occasionally cause segfaults.

Cynical take about the open source programming languages world

The best systems engineers are trans and mentally unwell. Appearing progressive is how you keep them productive instead of spiralling. The 2nd best systems engineers are virgin gooners. Appearing progressive gives them a chance to be around women. The 3rd best systems engineers are m'lady neckbeards. Appearing progressive is how they simp.

Everyone else who's good enough to be developing the Rust language is getting paid millions at a quant firm or millions at an LLM frontier lab.

Surface-level progressivism is a win-win stable state for open-source PLs.

This is a semi-recent change. Two decades ago, hacker spaces were notoriously libertarian. The lesson of open source and the internet in general is that libertarians are utterly unable to defend against progressive take-over.

I think being mentally off is the cause of both the systems engineering skill and the trans.

The best systems engineers are trans and mentally unwell.

Objection: Fabrice Bellard is quite obviously not trans and nothing suggests he's mentally unwell either. The guy makes John Carmack look like a noob.

Well that's technically not true, they did. It's just that calling .unwrap(), a function which will immediately abort the application on error, counts as "handling" the error. In other words, the path of least resistance is not to actually handle the error, but to crash. I argue that this isn't a better outcome than what would have happened in C, which would also be to crash. Sure, the crash won't be a segfault in Rust, but that doesn't matter if half the Internet dies.

It is by no means the path of least resistance to unwrap errors. It's just as easy to write if let Ok(foo) = bar and handle the error in a non-panic way. The simple fact of the matter here is that the Cloudflare programmers went out of their way to crash if the program got to an invalid state. What is the language supposed to do to prevent that, not allow unwrap? That would hardly be an acceptable solution as unwrap is genuinely useful in some circumstances. People are quick to pick on Rust for this but I don't think there's anything Rust, or for that matter any programming language, could have done to prevent that outage.
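
Side by side (the file name is invented and has nothing to do with the actual Cloudflare code):

use std::fs;

fn main() {
    // The panicking path, the .unwrap() in question:
    // let features = fs::read_to_string("features.json").unwrap();

    // The non-panicking path is barely more typing:
    if let Ok(features) = fs::read_to_string("features.json") {
        println!("loaded {} bytes", features.len());
    } else {
        eprintln!("could not load features, using defaults");
    }
}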

Rust has these things called unsafe blocks that let you do unsafe memory operations, closer to what you would be allowed to do in C (though granted, I have heard convincing arguments that unsafe Rust is still generally safer than C). So the path of least resistance is not to do things the safest way, but to just surround everything in unsafe if you get tired of fighting the borrow checker.

This is a terrible argument. First of all, surrounding things in unsafe blocks doesn't do a damn thing to get rid of errors with the borrow checker. The borrow checker still applies inside unsafe blocks! And the CVE in Linux wasn't caused by "just surrounding everything in unsafe", but by a logic error inside the unsafe blocks they needed to use for their purpose. Again, what is Rust supposed to do? Not allow unsafe? It would be useless for its target audience then.

Lots of Rust people will admit the language has problems. I'm one of them; I think that the language has plenty of flaws. But what they aren't going to do is accept bad arguments that ask impossible tasks that no language can do, or that would render Rust unfit for its domain. Nor should they.

It is by no means the path of least resistance to unwrap errors. It's just as easy to write if let Ok(foo) = bar and handle the error in a non-panic way

Which is? You were supposed to get a list of features, it's always supposed to be there, what do you do? The person writing that code is thinking corrupted file or broken hard drive; panic is the correct thing to do. People are pointing fingers at that piece of code, but to be honest I would point fingers upstream, at either the optimization that introduced the fixed-size memory area or the pipeline producing the features file, which didn't check that the size of the file was under the limit.

Speaking of the optimization, the right thing to do there would have been to reallocate the struct to a bigger size if the preallocated one is insufficient. But I bet that would have introduced a bunch of ownership problems, hence we can circle back to this being a Rust problem.

This is a terrible argument. First of all, surrounding things in unsafe blocks doesn't do a damn thing to get rid of errors with the borrow checker. The borrow checker still applies inside unsafe blocks!

This is a problem caused by updating a linked list, a data structure that Rust has historically struggled to implement without using unsafe.
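
For the unfamiliar: the safe-Rust workaround is shared ownership plus runtime borrow checks, which is exactly what kernel code doesn't want. A sketch (nothing to do with the actual kernel code):

use std::cell::RefCell;
use std::rc::{Rc, Weak};

// Every node needs two owners (its neighbours), so safe Rust pushes
// you into Rc<RefCell<...>>. `prev` must be Weak or the prev/next
// cycle would leak. The alternative is raw pointers inside unsafe
// blocks, which is what std's own LinkedList does internally.
struct Node {
    value: i32,
    next: Option<Rc<RefCell<Node>>>,
    prev: Option<Weak<RefCell<Node>>>,
}

fn main() {
    let a = Rc::new(RefCell::new(Node { value: 1, next: None, prev: None }));
    let b = Rc::new(RefCell::new(Node { value: 2, next: None, prev: None }));
    a.borrow_mut().next = Some(Rc::clone(&b));
    b.borrow_mut().prev = Some(Rc::downgrade(&a));

    let next = a.borrow().next.clone().unwrap();
    let prev = b.borrow().prev.as_ref().and_then(Weak::upgrade).unwrap();
    println!("{} <-> {}", prev.borrow().value, next.borrow().value); // 1 <-> 2
}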

Which is? You were supposed to get a list of features, it's always supposed to be there, what do you do?

I'm not saying "panic is the wrong thing to do" in the situation they had. It might be the right thing to do! I'm just saying that blaming the language for the programmer choosing to panic is a bad line of argument.

An engineering organization doesn't have to accept the default compiler behavior for a language. They can use linters and other tools to restrict (or expand!) the kind of code that's acceptable. And they can have a culture that values thoroughness or that values moving fast and breaking things.

I think the best argument for something like Rust is that it makes it easier to guarantee quality where it matters to the organization. If quality doesn't matter to the organization, whether explicitly through tooling and coding standards or implicitly through seeing what gets people promoted or fired, then people will circumvent safeguards whatever language it is.

There lies the rub, though: the way Rust is being introduced defeats the best (and possibly only) argument for it. If they wanted to move fast and break things, they could just stick with C. Hell, in some of these cases the C code is even the thorough option, as it's already been in use for many years, so it's well tested. Rewriting it in a completely new language, marketed entirely on memory safety, only to disable the safety features throughout the codebase is supposed to achieve what, exactly?

As a former haskell dev this reminds me why pure functional languages are uncommon in production. Pure functional languages are amazing 90% of the time but are a disaster 10% of the time. Since the 10% can derail a project people don't want to use them. The solution has been integrating functional features into multiparadigm languages so that devs can write 90% functional style code and then use imperative code where functional code just doesn't work well.

Rust's memory safety is great 90% of the time and becomes a blocker 10% of the time. A combination of using rust's memory features and unsafe operations allows for high flexibility and relatively high memory safety.

I don't know if the split is going to be 90/10 when you're messing around with the kernel. Also, when you're rewriting old code from scratch, the risk of introducing new bugs is pretty high. When you want to replace something that's been in production for years, if not decades, you'll need a better argument than "it's perfectly safe 90% of the time".

You haven't mentioned that to do anything even slightly kernel-y or touching hardware, you would need unsafes sprinkled throughout your code. I find Rust to be churn for churn's sake. Additionally, it is infused with woke shit all throughout its governance body.

I'm with you there. I didn't mention any of those issues in my post for the sake of time, but the behaviour of activists in the Rust community (granted, like all activists they are a vocal minority) is very off-putting.

Are they, though? It seems like Rust has more of 'em than many other languages and they're at every level from solo projects to people involved with the foundation.

I don't think it's unfair to say that 90% of Rust programmers just want to code and be left alone, while the 10% are either vocal or they make themselves very visible when you trawl through the 9,001 dependencies of any Rust crate. But it's hard to quantify when the rot seems to be everywhere. Python forced out Tim Peters, after all. I like Null's take about the best response to people who hate you being to say fuck you and use their software anyway.