
Culture War Roundup for the week of January 29, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


If anything, working at Google actually made me a lot more confident about their PII protections. They take it extremely seriously, and I'm surprised so many people were able to abuse it, though that's to be expected at their scale: Google has 175,000 employees and maintains billions of accounts.

To me, this is the exception that proves the rule: you're safer with Google.

I agree. When I worked at Google, I remember their security measures being extremely well thought out - so much better than the lax approach most tech companies take. What I DON'T trust is their ideological capture. They won't abuse people's information by accident, but I won't be surprised if they start doing it on purpose to their outgroup. And they have the tools to do it en masse.

Either for ideology or just to squeeze out a few more dollars. If Google's moats start falling, and their profits start falling with them, the first sign will just be that products get less good. That's understandable and fine; they won't have as much funny money to blow on non-profit-centers that add only marginal value but that customers like. But if it gets really bad, well, there's always Baker's Law: "You never know how evil a technology can be until the engineers who created it fear for their jobs."

I've never heard of that (and DDG brings up nothing except stuff that looks more relevant to biology). What are some examples of tech-gone-bad like that?

The extremely low-level version of this is the classic example of a free, simple app. I heard the story of one recently: it just let you change the brightness of the flashlight on phones that didn't have that functionality built in. It started off with basic ads. But as it became less and less profitable, crowded out by more phones having the feature built in, the developers saw the writing on the wall. Presumably they sold it to someone else, though I don't know in this particular case. In any event, either the original owner or someone who bought it added really obtrusive video ads... and then snuck in a $15/month subscription charge, basically banking on the app already being installed on some number of phones, with some fraction of those users not noticing or accidentally clicking the wrong thing. This is the really simple version.

I'd have to go back and see if I can find any real examples, but you can imagine that an app which collects a bunch of data on you, maybe biometric data and such, could end up in a downward spiral, profit-wise. Who freaking knows how they'll sell it in order to make that last buck? Who knows what form of shady scaremongering they could do: "We see that you have this gene, and you're really in danger of [medical problem] (barely supportable by the scientific evidence), so you really ought to consult with [our shady partner who sells you some worthless shit and kicks us back money]."

Actually, just as I finished writing that, I thought of the example of virus protection software. That shit was constantly burying itself deeper and deeper into your system, until it had basically unfettered access to everything. Lots of people kept using it, mostly out of inertia. As it started getting squeezed out of the market, the vendors started squeezing customers harder and causing all sorts of problems, not least of which was the implicit position: "If our software has a vulnerability, attackers can use it to get deep access to your system, but you're probably oblivious to the details of how that works, so we're actually kind of okay with it, so long as it scares enough people into keeping up the subscription."

Jeez, never thought about it like that.

Gin, mdb, rpcsp... security there is taken very seriously. There are always potential holes in the system, but I trust Google much more to keep my data safe against realistic adversaries than anyone's homelab duct-taped together with VLANs and reverse proxies. (And at least 90% of alternative, non-Google third-party hosts are honeypots, whether out of incompetence or malice.)

The danger with Google is that Google co-operates with the authorities, either voluntarily, "voluntarily", or because they've been literally infiltrated.

It's absolutely fair to say that, if you're doing something the government places a high priority on detecting and punishing, Google is not the place to store digital evidence of it. That much is certain.

The issue comes in when someone believes that there exist digital safes that no one but they can open. You're not going to build one in your spare time, and you're certainly not going to find one in other well-known third-party services (which are just as compromised by the government and less secure than Google) or in unknown fly-by-night services (half of which are government honeypots, while the other half are run by people waiting to do a rug pull and steal all your bitcoin, and are probably breached by the government anyway).