Culture War Roundup for the week of April 1, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


FOSS and The XZ Problem

Security Boulevard reports:

A critical vulnerability (CVE-2024-3094) was discovered in the XZ Utils library on March 29th, 2024. This severe flaw allows attackers to remotely execute arbitrary code on affected systems, earning it the highest possible score (10) on both the CVSS 3.1 and CVSS 4.0 scoring systems due to its immediate impact and wide scope.

The exploit would allow remote code execution as root on a wide majority of systemd-based Linux (and Mac OSX, thanks homebrew!) machines. There are some reasonable complaints that some CVE ratings are prone to inflation, but this one has absolutely earned its 10/10; would not recommend. Thankfully, this was caught before the full exploit made it to many fixed-release Linux distros, and most rolling-release distros either would not have updated so quickly or would not yet be vulnerable (and, presumably, will be updating to fixed versions of XZ quickly), with the exception of a handful of rarely-used Debian options. Uh, for the stuff that's been caught so far.
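
For anyone wanting a quick sanity check: the known-backdoored releases are 5.6.0 and 5.6.1. A minimal sketch in Python, assuming xz is on your PATH; note that a 'clean' version string only rules out the known-bad releases, since distros shipped patched rebuilds under assorted version strings:

```python
# Hypothetical quick check: flag the XZ Utils releases known to carry
# CVE-2024-3094. Assumes `xz` is on PATH; a version outside the bad set
# is *not* proof of safety, just absence of one known indicator.
import re
import subprocess

BACKDOORED = {"5.6.0", "5.6.1"}  # versions named in the advisory

def xz_version() -> str | None:
    try:
        out = subprocess.run(["xz", "--version"],
                             capture_output=True, text=True, check=True).stdout
    except (OSError, subprocess.CalledProcessError):
        return None
    # Typical first line: "xz (XZ Utils) 5.6.1"
    match = re.search(r"\d+\.\d+\.\d+", out)
    return match.group(0) if match else None

if __name__ == "__main__":
    version = xz_version()
    if version is None:
        print("xz not found or not runnable")
    elif version in BACKDOORED:
        print(f"xz {version}: known-backdoored release -- update immediately")
    else:
        print(f"xz {version}: not one of the known-bad versions")
```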

Summary and FAQ for the more technically minded reader; the NIST CVE is here, and background on the initial discovery is here.

Ok, most of us who'd care remember Heartbleed. What's different here?

In this case, the exploit was near-certainly introduced intentionally by a co-maintainer of the library XZ Utils, by smuggling code into a binary test file, adding calls to execute that test file from live environments months later, and then working to hide any evidence. The combination of complexity in the attack (requiring fairly deep knowledge of a wide variety of Linux internals) and the bizarreness of the exploit steps (his FOSS history is sprinkled with things like replacing safe functions with their unsafe precursors, or adding loose periods in cmake files) leaves nearly zero chance that this was unintentional, and the guy has since disappeared. He was boosted into co-maintainership only recently, and only after the original maintainer was pressured to pick him up by a strangely large barrage of very picky users. The attacker even pushed to have these updates shoved into Fedora early.
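
The shape of that attack -- a payload hidden in a binary 'test' file, wired in by the build scripts -- at least suggests a crude audit heuristic: flag binary blobs in a source tree that the build machinery mentions by name. A rough sketch (the build-script globs are my guess at a starting set, not a complete list, and a filename-substring match like this will produce false positives):

```python
# Rough heuristic sketched from the attack shape above: walk a source
# tree, find binary files, and flag any whose names the build machinery
# mentions. BUILD_GLOBS is an assumed starting set; the substring match
# is crude and will false-positive on legitimately generated files.
import sys
from pathlib import Path

BUILD_GLOBS = ["configure*", "Makefile*", "CMakeLists.txt", "*.m4", "*.cmake", "*.am"]

def is_binary(path: Path, probe: int = 8192) -> bool:
    with path.open("rb") as f:
        return b"\x00" in f.read(probe)

def referenced_blobs(tree: Path):
    build_text = ""
    for pattern in BUILD_GLOBS:
        for script in tree.rglob(pattern):
            if script.is_file():
                build_text += script.read_text(errors="replace")
    for path in tree.rglob("*"):
        if ".git" in path.parts or not path.is_file():
            continue
        if is_binary(path) and path.name in build_text:
            yield path

if __name__ == "__main__":
    for blob in referenced_blobs(Path(sys.argv[1])):
        print(f"binary blob referenced from build scripts: {blob}")
```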

Most mainstream technical advisories aren't outright calling this a nation-state actor, but The Grugq is pretty willing to describe whoever did it as an 'intelligence agency', whether government or private, and with cause. Both the effort and the time put into this attack were vast, and the scope of the vulnerability it produced extreme -- though this might be the 'cope' answer, since an individual or small private group running this level of complex attack is even more disturbing. It's paranoid to start wondering how much of the discussion was aimed at encouraging XZ's maintainer to take on the bad actor here as a co-maintainer, but as people have more and more trouble finding evidence of those users' existence since, it might not be paranoid enough.

There are a lot of potential takeaways:

  • The Many Eyes theory of software development worked. This was an incredibly subtle attack that few developers would have been able to catch, by an adversary willing to put years into developing trust and sneaking the exploit in piecemeal.

  • Except it was caught because a Microsoft (Postgres!) developer, without looking at the code, noticed a performance impact (a toy version of that kind of timing check is sketched after this list). Shit.

  • This attack heavily exploited access through the FOSS community: the author was able to join sight-unseen through a year of purely digital communications, and the 'business decision' of co-maintainership came through a lot of pressure from randos or anons.

  • Except that's something that can happen in corporate or government environments, too. There are places where every prospective employee gets a full background check and a free prostate exam, but they're the outlier even for dotmil spheres. Many employers are having trouble verifying that prospective recruits can even code, and most tech companies openly welcome recent immigrants or international workers that would be hard to investigate at best. Maybe they would have recognized that the guy with a stereotypical Indian name didn't talk like a native Indian, but I wouldn't bet on even that. And then there's just the stupid stuff that doesn't have to involve employees at all.

  • The attack space is big, and probably bigger than it needs to be. The old school of thought was that you'd only 'really' need to do a serious security audit of services actually being exposed, and perhaps some specialty stuff like firewall software, but people are going to be spending months looking for weird calls in any software run in privileged modes. One of many boneheaded controversial bits of systemd was its increased reliance on outside libraries compared to precursors like SysV Init. While some people do pass tar.xz around, XZ's main use in systemd seems to be related to loading replacement keys or VMs, and it's not quite clear exactly why that's something that needs to be baked into systemd directly (a dependency-walk sketch after this list makes the resulting attack surface concrete).

  • But a compression library seems, just after cryptographic libraries, like a reasonable thing not to roll your own, and even if this particular use for this particular library might have been avoidable, you're probably not going to be able to trim that much out, and you might not even be able to trim this.

  • There's a lot of this that seems like the chickens coming home to roost for bad practices in FOSS development: random binary test blobs ending up on user systems, build systems that either fail silently on hard-to-notice errors or spam so much random text that no one looks at it, building from tarballs, and so on (a tarball-versus-git diff is sketched after this list).

  • But getting rid of bad or lazy dev practices seems like one of those things that's just not gonna happen.

  • The attacker was able to get a lot of trust so quickly because a significant part of modern digital infrastructure depended on a library no one cared about. The various requests for XZ updates and co-maintainer permissions look so bizarre because, for a library that does one small thing very well, it's quite possible only attackers cared. 7Zip is everywhere in the Windows world, but even a lot of IT people don't know who makes it (Igor Pavlov?).

  • But there's a lot of these dependencies, and it's not clear that level of trust was necessary -- quite a lot of maintainers wouldn't have caught this sort of indirect attack, and no small part of the exploit depended on behavior introduced to libraries that were 'well'-maintained. Detecting novel attacks at all is a messy field at best, and this sort of distributed attack might not be possible to detect at the library level even in theory.

  • And there's far more varied attack spaces available than just waiting for a lead dev to burn out. I'm a big fan of pointing out how much cash Google is willing to throw around for a more visible sort of ownage of Mozilla and the Raspberry Pi Foundation, but the full breadth of the FOSS world runs on a shoestring budget for how much of the world depends on it working and working well. In theory, reputation is supposed to cover the gap, and a dev with a great GitHub commit history can name their price. In practice, the previous maintainer of XZ was working on XZ for Java, and you haven't heard of Lasse Collin (and may not even recognize xz as a file extension!).

  • ((For culture war bonus points, I can think of a way to excise original maintainers so hard that their co-maintainers have their employment threatened.))

  • There have been calls for some sort of big-business-sponsored security audits, and as annoying as the politics of that get, there's a not-unreasonable point that they should really want to do that. This particular exploit had some code to stop it from running on Google servers (maybe to slow recognition?), but there are a ton of big businesses that would have been in deep shit had it not been recognized. "If everyone's responsible, no one is", but neither the SEC nor ransomware devs care if you're responsible.

  • But the punchline to Google's funding of various FOSS (or not-quite-F-or-O, like RaspberryPi) groups is that even the best-funded groups aren't doing that hot on even the most trivial problems. Canonical is one of the better-funded groups, and it's gotten them into a variety of places (default for WSL!), but they can't be bothered to maintain manual review for new Snaps despite years of hilariously bad malware.

  • But it's not clear that it's reasonable or possible to actually audit the critical stuff; it's easier to write code than to seriously audit it, and we're not just a little shy on audit capability -- we're orders of magnitude short.

  • It's unlikely this is the first time something like this has happened. TheGrugq is professionally paranoid and notes that this one looks like it was caught by luck, and that strikes me more as cautious than pessimistic.
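
To make the discovery point above concrete: the actual catch was reportedly 'SSH logins got about half a second slower', and a toy version of that observation is just timing a command against a recorded baseline. A sketch, with placeholder command and baseline:

```python
# Toy version of the observation that caught this: time repeated runs of
# a command and compare against a recorded baseline. The command and the
# baseline figure below are placeholders, not the real measurement setup.
import statistics
import subprocess
import time

def median_runtime(cmd: list[str], runs: int = 10) -> float:
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, capture_output=True)
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

if __name__ == "__main__":
    BASELINE_S = 0.01          # placeholder: a figure recorded on a known-good box
    cmd = ["ssh", "-V"]        # placeholder: the real catch profiled sshd logins
    runtime = median_runtime(cmd)
    if runtime > 2 * BASELINE_S:
        print(f"median {runtime:.3f}s vs baseline {BASELINE_S:.3f}s -- worth profiling")
    else:
        print(f"median {runtime:.3f}s -- within expectations")
```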
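For the attack-surface point: on many distros sshd links libsystemd (for service readiness notification), which in turn links liblzma, and you can see that sort of chain by walking a binary's shared-object dependencies. A sketch that parses ldd output (note that ldd runs loader code from its target, so only point it at binaries you already trust):

```python
# List the shared objects a binary pulls in, transitively, by parsing
# `ldd` output -- e.g. how sshd can reach liblzma via libsystemd on many
# distros. Caveat: ldd executes loader code from the target, so only run
# this against binaries you already trust.
import subprocess
import sys

def shared_deps(binary: str) -> list[str]:
    out = subprocess.run(["ldd", binary],
                         capture_output=True, text=True, check=True).stdout
    # Typical line: "\tliblzma.so.5 => /usr/lib/liblzma.so.5 (0x...)"
    return [line.split()[0] for line in out.splitlines() if "=>" in line]

if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else "/usr/sbin/sshd"
    for dep in shared_deps(target):
        note = "   <-- a compression library inside a login daemon" if "lzma" in dep else ""
        print(dep + note)
```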
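And for the building-from-tarballs point: the malicious build-to-host.m4 reportedly shipped only in the release tarballs and never appeared in the git repository, which is exactly the kind of delta a tarball-versus-git comparison surfaces. A sketch with placeholder paths (autotools legitimately generates tarball-only files, so the output needs human reading, not auto-blocking):

```python
# Diff a release tarball's extracted tree against the git checkout it
# claims to come from: files that exist only in the tarball, or differ,
# deserve reading. Paths below are placeholders; autotools legitimately
# generates tarball-only files, so this needs a human, not auto-blocking.
import hashlib
from pathlib import Path

def digests(root: Path) -> dict[str, str]:
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in root.rglob("*")
        if p.is_file() and ".git" not in p.parts
    }

def diff_trees(tarball_dir: Path, git_dir: Path) -> None:
    tar, git = digests(tarball_dir), digests(git_dir)
    for name in sorted(set(tar) - set(git)):
        print(f"only in tarball: {name}")
    for name in sorted(set(tar) & set(git)):
        if tar[name] != git[name]:
            print(f"differs from git: {name}")

if __name__ == "__main__":
    diff_trees(Path("xz-5.6.1"), Path("xz-git-checkout"))  # placeholder paths
```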

The Many Eyes theory of software development worked. This was an incredibly subtle attack that few developers would have been able to catch, by an adversary willing to put years into developing trust and sneaking the exploit in piecemeal.

I've watched a lot of doomerist takes on this one claiming that this proves many-eyes doesn't work, but I think it proves just the opposite. This was perhaps the most sophisticated attack on an open source repo ever witnessed, waged against an extremely vulnerable target, and even then it didn't even come close to broad penetration before it was stopped. Though it's obvious, it bears belaboring that it wouldn't have been possible for our Hero Without a Cape to uncover it had he not been able to access the sources.

If I had to guess, I would suppose that glowing agencies the world round are taking note of what's happened here and lowering their expectations of what's possible to accomplish within the open source world. Introducing subtle bugs and hoping they don't get fixed may be as ambitious as one can get.

That being said, I'm not sure that the doomerism is bad. The tendency to overreact may very well serve to make open source more anti-fragile. Absolutely everyone in this space is now thinking about how to make attacks like this more difficult at every step.

This was perhaps the most sophisticated attack on an open source repo ever witnessed, waged against an extremely vulnerable target, and even then it didn't even come close to broad penetration before it was stopped.

Witnessed is a little important, here; I'm not as sure as TheGrugq that this isn't the first try at this, if only because no one's found (and reported) a historical example yet, but I'm still very far from confident it is the first. And it did get really close: I've got an Arch laptop that has one part of the payload.

Though it's obvious, it bears belaboring that it wouldn't have been possible for our Hero Without a Cape to uncover it had he not been able to access the sources.

That's... not entirely clear. Visible-source seems to have helped track down the whole story, as did the development discussions that happened in public (though what about e-mail/discord?), but the initial discovery seems like it was entirely separate from any source-diving, and a lot of the attack never had its source available until people began decompiling it.

The tendency to overreact may very well serve to make open source more anti-fragile. Absolutely everyone in this space is now thinking about how to make attacks like this more difficult at every step.

Yeah, that part is encouraging; I've definitely seen places (not just in code! aviation!) where people look at circumstances like this and consider it a sign that there was enough redundancy, rather than just enough redundancy for this time. I think it's tempting to focus a little too much on the mechanical aspects, but that's more a streetlamp effect than a philosophical decision.

Visible-source seems to have helped track down the whole story, as did the development discussions that happened in public (though what about e-mail/discord?), but the initial discovery seems like it was entirely separate from any source-diving, and a lot of the attack never had its source available until people began decompiling it.

There isn't any evidence directly supporting this, but I saw a claim that the entire account of the discovery ("slow SSH connections") could easily enough be parallel construction, prompted by The Powers That Be (tm) being aware of this effort for other reasons -- which could range from "we know our adversaries are trying to insert this code" to "we run our own audits but don't want to reveal their details directly". Even in such a situation it's a bit unclear who the parties would be (nation-state intelligence agencies, possibly certain large corporations independent of their respective governments). The obvious claim would be something like "NSA launders data to Microsoft to foil North Korean hacking attempts", but "China foils NSA backdoor attempts" isn't completely implausible either.

That said, I can only imagine that sneaking a backdoor like this into a proprietary build system would be even less likely to be detected: the pool of people inspecting the Windows build system is much smaller than the pool looking at (or at least able to look at) libxz and its (arcane, but somewhat industry-standard) autoconf scripts.

Also, this is the sort of thing that I have a vested interest in as a long-time personal and professional Linux user. I have the skills ("a very particular set of skills") to follow the details of issues like this, but there isn't yet any organized way to actually contribute them. I'd be willing to spend maybe a few hours a month auditing code and reviewing changes to "boring" system packages, but I'd never have thought to look at libxz specifically, nor do I have the "street cred" for people to actually take any feedback I give seriously. And even then, this particular issue is underhanded enough to be hard to spot even when an expert is looking right at it. Does anyone have any suggestions for getting involved like this?