Culture War Roundup for the week of April 10, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

I remember back in 2016 I was sitting on my cousin's deck for the first birthday party of one of his kids, and my uncle posed a question to the group: would the kid in question ever get a driver's license? Now, he has a habit of going out on certain limbs when arguing, but he seemed utterly convinced that fifteen years hence autonomous vehicles would be so ubiquitous as to obviate the need for any driver training among normal people. I argued against the idea, but only to the extent that the regulatory landscape wouldn't change that fast; I certainly thought the technology would be there, but I doubted that regulators and insurance companies would have the stomach to turn all operations over to computers. Of course, that was around the time when everyone was talking about AVs. A guy near me trying to win the Democratic nomination for state rep was basing his entire campaign on handling the disruption that would soon wreak havoc on the trucking industry. I saw Uber's AVs on an almost daily basis near my office in Pittsburgh. CGP Grey was making videos about how full autonomy would basically solve traffic congestion, at least as long as you don't give a fuck about pedestrians.

This summer, that kid will be halfway toward qualifying for a learner's permit, and autonomous vehicles seem further away now than they did when he was one. Less than two years after that party, a woman in Arizona was killed after being hit by an Uber self-driving car. From the evidence available, it didn't look to me like the accident was avoidable, and had it involved a standard car it would have made the local news for a couple of days but probably wouldn't have even resulted in charges being filed. But since it was an AV, the story went national and public trust eroded. It would be easy to blame this incident for the collapse of enthusiasm over AVs, but let's face it: something like this happening was only a matter of time, and the public response was entirely predictable. So the industry plugged along, and keeps plugging along, though fewer and fewer people seem to care. Uber's out, Ford's out, Volkswagen's out, GM is under investigation, Apple seems directionless and indifferent, and a recent Washington Post article claims that Tesla cut quite a few corners in its pursuit of offering its customers something that could be marketed as progress.

Hype for AVs started picking up in earnest among the tech horny around 2012. Three years later the buzz was mainstream. All throughout this period various industry leaders kept making bold predictions about truly autonomous products being only a few years away. Okay, maybe with some caveats, like only on the highway, or in geofenced areas, or whatever, but still, you'd at least be able to get something that had some degree of real autonomy. The enthusiasm seemed justified, though, since, practically overnight, self-driving cars went from something that you'd occasionally hear about in science magazines when some university was doing basic research to something that major tech and auto companies were sinking billions of dollars into. Around the same time, regular cars started getting features like adaptive cruise control and lane keep assist that seemed like self-driving under another name, and Tesla's autopilot feature seemed like a huge leap. With the normal acceleration of technology plus the loads of money that were being dumped into any number of competing companies, it was only a matter of time. Now, ten years and 100 billion dollars later, the only products that are available to an average consumer are a few unreliable ridesharing services in cities that don't have weather.

I'm bringing this up because there are a lot of parallels between AVs and GPT-4. This is a huge, disruptive technology that relies on AI, and, while it may have some critical flaws in its current implementation, technology is constantly improving, often exponentially, as processing power increases. And while I don't have access to GPT-4 myself, I'm sure it's as impressive as everyone claims it is. The trouble is, impressing people with no skin in the game is easy. Convincing people to rely on it is a whole different animal. Most people found AVs pretty impressive when they first came out. But being impressive doesn't cut it when you're looking to replace human drivers; you actually have to be better than human drivers, or at least as good as them. And human drivers are pretty damn good. In 2021 there were around 5.2 million reportable accidents in nearly 3 trillion miles driven (in PA an accident is reportable if one of the cars is inoperable or there is injury or death, though other states may vary). This means that, in any given mile of driving, one's chances of getting into an accident more serious than a fender bender are about 0.000181%. If you drive 15,000 miles a year, you'll get into an accident about once every 37 years. If Elon Musk or whoever announced that they had developed a system that avoided accidents 99.9% of the time, that would sound impressive. But it wouldn't be; at that rate, the average driver would be getting into about 15 crashes per year! Even at 99.99% you'd still be getting into more than a crash a year: three every two years. Imagine what your insurance rates would be like if you got into a crash a year.
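If you want to check that arithmetic yourself, here's a quick back-of-the-envelope sketch in Python. The 2.88 trillion mile figure is my own assumption, back-solved from the quoted 0.000181% per-mile rate ("nearly 3 trillion" in the text); everything else comes straight from the numbers above.

```python
# Rough sketch of the accident math; figures are illustrative, not authoritative.
reportable_accidents = 5.2e6   # reportable US accidents, 2021
miles_driven = 2.88e12         # "nearly 3 trillion" vehicle-miles (assumed)
annual_miles = 15_000          # a typical driver's yearly mileage

per_mile = reportable_accidents / miles_driven
print(f"per-mile accident rate: {per_mile:.6%}")   # ~0.000181%

per_year = per_mile * annual_miles
print(f"human driver: {per_year:.3f} crashes/year, one every {1 / per_year:.0f} years")
# -> ~0.027 crashes/year, i.e. one roughly every 37 years

# A hypothetical AV that "avoids accidents 99.9% of the time",
# read as a 0.1% chance of a crash in any given mile:
for success_rate in (0.999, 0.9999):
    crashes = (1 - success_rate) * annual_miles
    print(f"AV at {success_rate:.2%} per mile: {crashes:.1f} crashes/year")
# -> 15.0 crashes/year at 99.90%, 1.5 at 99.99%
```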

And that doesn't even take into account all the miscellaneous bullshit that AVs do that doesn't cause accidents but nonetheless makes them untenable. They have trouble with unprotected left turns (aka most left turns), and they'll take circuitous routes to avoid them. They don't like construction, even minor construction like a lane being blocked off with cones. They get confused when, say, a landscaper has mulch bags hanging into the street a little bit. Or when driving down a narrow street with cars parked on both sides. And when this happens they just stop and call home. The people who use these ride-sharing services are then forced to wait while a tech shows up to deal with the problem, disrupting traffic in the meantime. And I won't even mention inclement weather.

Making something look impressive during early testing is easy, but convincing someone to rely on it when safety, or money, or anything else that actually matters is at stake is a much harder sell, as the accuracy has to be pretty damn close to 100% before anyone will actually trust it. And if AVs are any indication, it's really hard to get to 100%. Which is why I wouldn't be surprised if AI right now is at about the same stage AVs were in 2016. Impressive, but far from ready for prime time. Everyone keeps saying that the next iteration is going to be a game changer, and everyone is increasingly impressed, but not impressed enough to trust their business to it. Eventually it gets to the point where research is so expensive and the returns so meager that no one in their right mind would invest in it; smaller firms go bust while larger ones scale back considerably, or at least redirect their AI research toward applications where it might actually be used commercially. Then we're all sitting here in 2030 asking ourselves what happened to the AI revolution that seemed right around the corner. I could be wrong, but if that's the case, then hey, we should at least have some operable self-driving cars.

The fourth law of thermodynamics states that the first 90% of the project takes 90% of the time, and the last 10% of the project takes the other 90% of the time.

It blows my mind how so many people who really should know better are so eager to declare that they’ve reached the finish line, because “there are only a few minor problems left that will be resolved in a year or two”. Anyone who has delivered a non-trivial product to real users (particularly in software engineering) knows just how much time and effort gets eaten up during the long slog to 100%, and how many unforeseen problems can crop up.

You’ve reached the finish line when you’ve reached the finish line. You’ve reached the finish line when the product works as advertised and is being used for its intended purpose. Not before.

And yet I think with self-driving and AI we should ask for good enough. I want Rosie from The Jetsons for my home and a car that can take me home safely enough under normal road conditions. If Rosie occasionally folds the family cat in with the rest of the laundry, I can live with that.

And yet I think with self-driving and AI we should ask for good enough.

We most certainly should not.

AVs are a classic example of a technology where "good enough" isn't good enough. AVs need to be at least as safe as a top human driver; personally, I want them to be strongly superhuman in all driving conditions before I would consider getting into one. I'm not going to die because some tech CEO just thought it would be really cool to put his AV on public highways before it was actually ready. I don't know why this would even be up for debate.

In general I'm sick of "good enough". I want correct. I want quality. From the industrial revolution up until the present day, automation has frequently been accompanied by a decline in the quality of goods and services (not to mention a decline in working conditions: the Luddites weren't anti-technology per se, they were against working 12-hour factory shifts for a pittance). Has customer service gotten better because of automation? "Let me talk to a damn human" has become a common refrain. You know, you have writers being told "we know you're a lot better than GPT, but we're going with GPT anyway because it's cheap and good enough". This should not be acceptable. I mean, on a broad cultural level, it should be baked into the fabric of people's attitudes that this is not acceptable.

I am an unabashed supporter of the MIT school of design over the New Jersey school. I think this is part of the reason why LLMs irritate me on a fundamental level: they got so far, so fast, using only bull-headed statistical methods that require no prior theoretical understanding of the structure of language and make no guarantees about the robustness or correctness of the underlying system. It feels like cheating, basically. I understand Chomsky's chagrin well. It feels like something very important to you, something you care deeply about, is being rejected by the laws of nature.

With each passing year I find more and more reason to be sympathetic to the Marxist position. If this is capitalism, then capitalism sucks.

I'm not going to die because some tech CEO just thought it would be really cool to put his AV on public highways before it was actually ready. I don't know why this would even be up for debate.

No, you're going to die because of a preference to die under the wheels of a pothead scrolling their phone, a drunk that doesn't like cyclists, or just regular old human error instead. Of course, you probably won't wind up dying in any of these ways, but the obvious question is which policy gives you lower odds of dying rather than whether a system can prevent all errors under all circumstances.

Has customer service gotten better because of automation? "Let me talk to a damn human" has become a common refrain.

In my experience, yes, it has. I can frequently resolve problems using automated systems that are easier and faster than working with humans. For example, if my flight can't take off due to weather and I'm going to miss my connection, I can hop on my phone and the system will have already suggested options for me to switch to; I can see the options myself and select whichever best fits my travel preferences. A human could do that too, but they have to do it for every individual trying to make that connection, which results in a big line, and any preference I have, however unusual, must be explained out loud rather than selected with a few taps.

On the flip side of things, my experience with "let me talk to a damn human" is that it's usually someone looking for some sort of special favor that's outside of policy and may or may not be possible. I have repeatedly encountered this mentality from customers when working in customer service: people who really want to get on the phone to explain their situation, when I actually don't care what their reasons are and I'm just going to reiterate that the policy is what it is, and I will not be giving them anything free or making exceptions just because they really want them.

So yeah, on net, I would consider automated solutions to have sharply improved my experience as both a customer and service provider.

The long and short of it with AVs is that driving is one of my main sources of relaxation and pleasure, and so I'll only give up my manual car when you pry it out of my cold dead hands. I could take a more "objective" view of the minutiae of AV policy if I was pressed to. But I'm not particularly interested in doing so.

my experience with "let me talk to a damn human" is that it's usually someone looking for some sort of special favor that's outside of policy and may or may not be possible.

I was definitely subconsciously reaching for this notion when I brought up customer service. Thank you for articulating it for me.

Society should be set up in such a way that special favors are possible. It should be malleable, pliable, it should admit of edge cases and exceptions. It should be conceivable that you can lean on a human's empathy or frailty or inadequacies in order to get things done. That's what a human society looks like.

To be sure, the failure modes of such a style of operation when taken to excess are well understood. But it's still preferable to the alternative of a fully mechanized and perfectly efficient society, with all its cold digital exactitude. I don't want every public and private institution to operate like Google and YouTube: no way to talk to a human, no edges or seams, solid, impenetrable, immovable. Do you want your employer to operate that way? Or the criminal justice system? Will JudgeGPT be susceptible to the eleven magic words? Let's hope he has a particularly wise philosopher-king in charge of his alignment and RLHF.

You want it both ways then, man. You can't have a society that does things correctly instead of just good enough but is also malleable enough to allow people to make special accommodations for you; special accommodations are not correct. I think what you're asking for is something a lot of people want (it's definitely something I'd like): a return to the society of the past, where things were basically correct but the human factor meant a savvy operator could extract accommodations, because you are a savvy operator. But we can't have that, because we already taught everyone to be a savvy operator. We poisoned the well ourselves with our savvy operations.

You want it both ways then, man. ... special accommodations are not correct.

Au contraire.

A computer system is not a work of art; a work of art is not a human individual; and a human individual is not society as a whole. Things that are distinct should be judged by their own distinct standards that are proper to them. A standard of correctness that applies to one type of thing may not apply to another type; indeed, the entire notion of correctness may be appropriate to one category but actively detrimental to another.

Not that I have any particular qualms about contradicting myself anyway. Contradiction bears witness to the life of thought.

But we can't have that

I want what I want, based on my judgment of what is good and proper. It's no skin off my back if I "can't have it".

Sorry, I didn't explain myself well enough, I'll try again.

I'm not going to die because some tech CEO just thought it would be really cool to put his AV on public highways before it was actually ready. I don't know why this would even be up for debate.

It's up for debate because of your contradictions. The tech CEO who thinks being cool is more important than being superhumanly safe got there because he's a savvy operator. Savvy operating is cool. He doesn't exist in the world where you can't get special exceptions, because in that world he's not a CEO; he's in prison for fraud or malpractice or the like. But in a world where you can game human foibles to your own advantage, every human foible will be gamed.

I want what I want, based on my judgment of what is good and proper. It's no skin off my back if I "can't have it".

I don't understand what you mean. If you can't have what you want, isn't that exactly skin off your back? Or is it just the wanting that gets your motor going?