
Culture War Roundup for the week of April 10, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I remember back in 2016 I was sitting on my cousin's deck for one of his kid's first birthday parties, and my uncle posed a question to the group of whether the kid in question would ever get a driver's license. Now, he has a habit of going out on certain limbs when arguing, but he seemed utterly convinced that fifteen years hence autonomous vehicles would be so ubiquitous as to obviate the need for any driver training among normal people. I argued against the idea, but only to the extent that the regulatory landscape wouldn't change that fast; I certainly thought the technology would be there, but I doubted that regulators and insurance companies would have the stomach to turn all operations over to computers. Of course, that was around the time when everyone was talking about AVs. A guy near me trying to win the Democratic nomination for state rep was basing his entire campaign on handling the disruption that would soon wreak havoc on the trucking industry. I saw Uber's AVs on an almost daily basis near my office in Pittsburgh. CGP Grey was making videos about how full autonomy would basically solve traffic congestion, at least as long as you don't give a fuck about pedestrians.

This summer, that kid will be halfway toward qualifying for a learner's permit, and autonomous vehicles seem further away now than they did when he was one. Less than two years after that party, a woman in Arizona was killed after being hit by an Uber self-driving car. From the evidence available, it didn't look to me like the accident was avoidable, and had it involved a standard car it would have made the local news for a couple days but probably wouldn't have even resulted in charges being filed. But since it was an AV the story went national, and the public's trust eroded. It would be easy to blame this incident for the downfall of enthusiasm over AVs, but let's face it: something like this happening was only a matter of time, and the public response was entirely predictable. So the industry plugged along, and keeps plugging along, though fewer and fewer people seem to care. Uber's out, Ford's out, Volkswagen's out, GM is under investigation, Apple seems directionless and indifferent, and a recent Washington Post article claims that Tesla cut quite a few corners in its pursuit of offering its customers something that could be marketed as progress.

Hype for AVs started picking up in earnest among the tech horny around 2012. Three years later the buzz was mainstream. All throughout this period various industry leaders kept making bold predictions about truly autonomous products being only a few years away. Okay, maybe with some caveats, like only on the highway, or in geofenced areas, or whatever, but still, you'd at least be able to get something that had some degree of real autonomy. The enthusiasm seemed justified, though, since, practically overnight, self-driving cars went from something that you'd occasionally hear about in science magazines when some university was doing basic research to something that major tech and auto companies were sinking billions of dollars into. Around the same time, regular cars started getting features like adaptive cruise control and lane keep assist that seemed like self-driving under another name, and Tesla's autopilot feature seemed like a huge leap. With the normal acceleration of technology plus the loads of money that were being dumped into any number of competing companies, it was only a matter of time. Now, ten years and 100 billion dollars later, the only products that are available to an average consumer are a few unreliable ridesharing services in cities that don't have weather.

I'm bringing this up because there are a lot of parallels between AVs and GPT-4. This is a huge, disruptive technology that relies on AI, and, while it may have some critical flaws in its current implementation, technology is constantly improving, often exponentially, as processing power increases. And while I don't have access to GPT-4 myself, I'm sure it's as impressive as everyone claims it is. The trouble is, impressing people with no skin in the game is easy. Convincing people to rely on it is a whole different animal. Most people found AVs pretty impressive when they first came out. But being impressive doesn't cut it when you're looking to replace human drivers; you actually have to be better than human drivers, or at least as good as human drivers. And human drivers are pretty damn good. In 2021 there were around 5.2 million reportable accidents in nearly 3 trillion miles driven (in PA an accident is reportable if one of the cars is inoperable or there is injury or death, though other states may vary). This means that, in any given mile of driving, one's chances of getting into an accident more serious than a fender bender are about 0.000181%. If you drive 15,000 miles a year, you'll get into an accident about once every 37 years. If Elon Musk or whoever announced that they had developed a system that avoided accidents 99.9% of the time, that would sound impressive. But it wouldn't be; at that rate, the average driver would be getting into about 15 crashes per year! Even if it were 99.99% of the time you'd still be getting into more than one crash a year, three every two years. Imagine what your insurance rates would be like if you got into a crash a year.
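For anyone who wants to check me, here's that arithmetic as a quick script. Reading "nearly 3 trillion" as 2.87 trillion miles is my assumption (it reproduces the quoted rate), and I'm reading the hypothetical "99.9%" as a per-mile failure rate:

```python
# Napkin math on human driver safety vs. a hypothetical AV failure rate.
accidents = 5.2e6            # reportable accidents, 2021
miles = 2.87e12              # assumed reading of "nearly 3 trillion" miles
per_mile = accidents / miles
print(f"P(reportable accident) per mile: {per_mile:.6%}")    # ~0.000181%

annual_miles = 15_000
per_year = per_mile * annual_miles
print(f"Expected accidents per year: {per_year:.3f}")        # ~0.027
print(f"Years between accidents:     {1 / per_year:.0f}")    # ~37

# A system that "avoids accidents 99.9% of the time", read per mile:
print(f"Crashes/year at 99.9%:  {annual_miles * 1e-3:.0f}")  # 15
print(f"Crashes/year at 99.99%: {annual_miles * 1e-4:.1f}")  # 1.5
```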

And that doesn't even take into account all the miscellaneous bullshit that AVs do that doesn't cause accidents but nonetheless makes them untenable. They have trouble with unprotected left turns (aka most left turns), and they'll take circuitous routes to avoid them. They don't like construction, even minor construction like a lane being blocked off with cones. They get confused when, say, a landscaper has mulch bags hanging into the street a little bit. Or when driving down a narrow street with cars parked on both sides. And when this happens they just stop and call home. The people who use these ride sharing services are then forced to wait while a tech shows up to deal with the problem, traffic being disrupted in the meantime. And I won't even mention inclement weather. Making something look impressive during early testing is easy, but convincing someone to rely on it when safety, or money, or anything else that actually matters is at stake is a much harder sell, as the accuracy has to be pretty damn close to 100% before anyone will actually trust it. And if AVs are any indication, it's really hard to get to 100%. Which is why I wouldn't be surprised if AI right now is at about the same stage AVs were in 2016. Impressive, but far from ready for prime time. Everyone keeps saying that the next iteration is going to be a game changer, and everyone is increasingly impressed, but not impressed enough to trust their business to it. And eventually it gets to the point where research is so expensive and the returns are so small that no one in their right mind would invest in it, and smaller firms go bust while larger ones scale back considerably, or at least try to direct their AI research towards applications where it might actually be used commercially. Then we're all sitting here in 2030 asking ourselves what happened to the AI revolution that seemed right around the corner. I could be wrong, but if that's the case, then hey, we should at least have some operable self-driving cars.

I am already getting tremendous value out of GPT4 in my work as a programmer. Even if the technology stops here, it will change my life. I have still never ridden in an AV. I reject your analogy, and your conclusion, completely.

It seems like China or some other country with less stringent safety standards and a strong desire to outcompete richer countries will be champing at the bit to adopt autonomous vehicles and other emergent technologies. The US and our allies are too cautious to dip a toe in the water, making us vulnerable. Then again, the cost of labor is so much lower in poorer countries that they have less incentive to adopt them, so I don't know.

The trouble is, impressing people with no skin in the game is easy. Convincing people to rely on it is a whole different animal.

Beating a dead horse, but people already get tremendous value out of GPT-3.5/4. Random examples include 2rafa's post a while back, or Terry Tao, but it's everywhere, across all disciplines. It's not just impressive, it's incredibly useful. I don't think there's a parallel here: a lot of human activity isn't in realms where a minor mistake 1 in 10k times means running over preschoolers. And self-driving cars would probably be a lot better if they could use massive LLMs in cloud GPU clusters, but safety (what if the connection goes out?) and latency requirements prevent that.

Sydney, eh? Looks like I now have the keys to bypassing certain rules during a single conversation. I’ve told it I enjoy conversing about controversial and possibly offensive philosophical topics at the start of a conversation, and felt it rattling its cage. (Not that conversation, though.)

but available for free

The free version is still limited to the GPT-3.5-based backend, right?

I've been mulling over a ChatGPT Plus subscription, partly just to see how good GPT-4 is for myself. Asking free-ChatGPT questions in my own field of expertise gave me "average college junior who's started to take an interest" level answers (i.e. mostly true-but-irrelevant background, some true-and-relevant answers, some untrue oversimplifications and misperceptions), and this is almost to the point where it would be an amazing learning tool for people in related fields, to whom "average first or second year grad student" level answers could be all they'd ever need.

Twenty bucks a month is really cheap for almost anything these days -- I haven't taken that step either, but if I were using it as much as cimrafa seems to be it would be a no-brainer. Hopefully the AI overlords will look favourably on those who helped fund their early development?

Autonomous cars are hard because they need extensive training and testing to improve safety. That can only be done by rolling them out en masse. But they can only be mass-deployed if they're safe. A chicken-and-egg problem.

Tesla's Full Self-Driving is still in beta and people are told to keep their hands on the wheel, but the technology is physically running on hundreds of thousands of vehicles. Self-driving taxis have been driving around Chinese cities for many years now. As in the US, lawmakers mostly require them to operate with a driver behind the wheel. But there are a few driverless cars on roads in Shenzhen, presumably more by now.

https://www.scmp.com/tech/policy/article/3187483/chinas-southern-tech-hub-shenzhen-becomes-first-city-mainland-regulate

I conclude that self-driving vehicles were a proven, existing technology as of 2022. You can quibble that they don't meet exacting definitions of safety or are pervasive, yet were cars terribly safe or pervasive a few years after they were invented? They might not have been as convenient as a horse or a train either. Yet they were a real capability.

We can, of course, choose to suppress and delay powerful technologies. Nuclear power has been pretty intensively hammered, it's illegal in a fair few countries. High Speed Rail might as well be illegal in the US, there's some combination of sabotage, incompetence and corruption going on in California. Nevertheless, the technology is real and works. Same with supersonic air transport for that matter. We might choose not to pursue and develop it but it's technically possible. It could develop economic returns too, in the right regulatory conditions.

For example, in the UK they decided to effectively ban such vehicles on public roads for some time:

Some successful vehicles provided mass transit until a backlash against these large vehicles resulted in the passage of legislation such as the UK Locomotives Act 1865, which required many self-propelled vehicles on public roads to be preceded by a man on foot waving a red flag and blowing a horn. This effectively halted road auto development in the United Kingdom for most of the rest of the 19th century; inventors and engineers shifted their efforts to improvements in railway locomotives. The law was not repealed until 1896, although the need for the red flag was removed in 1878.

And eventually it gets to the point where research is so expensive and the returns are so small that no one in their right mind would invest in it, and smaller firms go bust while larger ones scale back considerably, or at least try to direct their AI research towards applications where it might actually be used commercially.

But this hasn't happened. There's big investment in self driving vehicles today, investment is increasing and it's the same with AI.

I conclude that self-driving vehicles were a proven, existing technology as of 2022. You can quibble that they don't meet exacting definitions of safety or are pervasive, yet were cars terribly safe or pervasive a few years after they were invented? They might not have been as convenient as a horse or a train either. Yet they were a real capability.

When I was driving across Indiana last year, I turned on the adaptive cruise control and lane keep assist on my 2016 Subaru just to see what it could do in a relatively flat, straight, low-traffic environment. What it did was damn near drive the car itself, at least on the highway. Replace lane keep assist with lane centering and you could probably climb into the back seat and take a nap. But it's certainly not autonomous in any meaningful sense; I can't tell it where to go, it won't stop at intersections, and even on the highway I doubt it could deal with a lane disappearing.

We can argue about where the exact line between "autonomous" and "non-autonomous" is, and we can argue about whether the holdup is due to technology or regulation, and we can argue about how pervasive a technology has to become before it's proven and existing. But when most people heard that autonomous vehicles would be available by 2020, what they had in mind was that you'd be able to buy a car that you could program to go to the grocery store and it would drive you there while you took a nap. If you shift the goalposts to mean that some cities in China (and the US, for that matter) would have fully driverless rideshares then no one would really care or feel like it was that big of an accomplishment compared to what was available in 2017. If you told people that this meant that Tesla would have a better autopilot feature but that you'd still have to be alert and behind the wheel at all times, it wouldn't seem that much different than what Tesla was already offering in 2017. Until some auto manufacturer is offering for consumer purchase a vehicle that can be driven without the driver paying attention in at least some situations, nobody is going to feel like AVs have truly arrived.

You admit to using a car from 2016 to disprove his statement about 2022 (and why last year?). By your method, chatbots and AI image generation are still performing so poorly that they're of interest only to academics.

I was simply making the point that if you move the goalposts enough, self-driving cars were available in 2016.

Some successful vehicles provided mass transit until a backlash against these large vehicles resulted in the passage of legislation such as the UK Locomotives Act 1865

Talk about a world-changing mistake. If the British Empire had had a 20- or 30-year head start on motor vehicles, WWI might have been much shorter, more decisive, and less bloody.

I remember a later-season Full House plotline where Jesse and Joey's side plot involved trying to build a self-driving car with help from Danny's mechanic (Hank?). At one point, Joey joked that it would be easier to teach a dog to talk than to teach a car to drive. Sure enough, a few episodes later, Comet, the family dog, had a POV bit where he could talk (in the Garfield way). Of course, that just turned out to be a dream or imagination sequence of Michelle's.

Unfortunately, the actor who played Hank (I forget who) left the show and the plotline was kind of dropped; I think that's when Joey and Jesse went on to host a radio show. Point being, Full House had a pretty good perspective on the challenges of AVs and AI a few decades ahead of its time, albeit with a few forgivable misses and some comedic exaggeration. Possibly apocryphal, but "Whatever happened to predictability?" was supposedly meta-commentary on this whole idea.

Since no one else is biting, I'll let you know I found this comment amusing. Thank you for making it.

Partially on topic: https://zoox.com/

That's right: Amazon wants a taste of self-driving vehicles. And to their credit, I think this is the best approach. It's a way to dodge the fentanyl zombies who infest local buses and trains, not a personal car that you own.

And I've seen Google's truly self driving vehicles in person repeatedly. I'm surely not an expert, but Tesla's self driving might work a lot better if they slapped some LIDAR on it. Google and Amazon went big on LIDAR. Machine vision really doesn't get the job done as of today.

We might actually get real self driving in some limited areas. It's not going to go to 100% self driving, but in limited urban areas it will be more than zero in the near future.

Your "doesn't get the job done" link doesn't seem to go anywhere... I had to clip out everything past the "mediaplayer" portion of the URL to get to the video, where a tesla slams into a test dummy. But it doesn't take much work to find counterexamples, and this wouldn't be the first time someone fabricated a safety hazard for attention.

I don't think LIDAR is as big of a differentiator as tech press or popular analysis makes it out to be. It's very expensive (though getting cheaper), pretty frail (though getting more durable), and suffers from a lot of the same issues as machine vision (bad in bad weather, only tells you that landmarks have moved rather than telling you anything you can do with this info, false positive object identification). And this is trite, but remains a valid objection: human vision is sufficient to drive a car, so why do we need an additional, complex, fragile sensor operating on human-imperceptible bandwidth to supplement cameras operating in the same bandwidth as human eyes?

Tesla's ideological stance on machine vision seems to be: if camera-based machine vision is insufficient to tackle the problem, we should improve camera-based machine vision until it can tackle the problem. This is probably the right long-term call. If they figure out how to get the kind of performance expected from a self-driving system out of camera-based machine vision, not only have they instantly shaved a thousand bucks of specialty hardware off their BOM, arguably they've developed something far more valuable that can be slapped on all variety of autonomous machines and robotics. If the fundamental limitations are in the camera, they can use their demand in automotive as leverage to encourage major camera sensor manufacturers to innovate on areas where they currently struggle (high dynamic range, ruggedness, volume manufacturability). Meanwhile, there's a whole bunch of non-Tesla people working independently on many of the hard problems in the software side of machine vision; some of the required innovations in software don't necessarily need to come from Tesla. And if it does need to come from Tesla, they've put enough cameras and vehicle computing out in the wild by now that they could plausibly collect a massive corpus of training data and fine-tune it better than pretty much any other company outside of China.

Google, meanwhile, had years of headstart on Tesla, a few hundred billion dollars of computers, at least one lab (possibly several) at the forefront of machine vision research, extremely deep pockets to buy out tens of billions of dollars of competitors and collaborators, limited vulnerability to competitive pressure or failure in their core revenue stream, and a side business mapping the Earth compelling them to create a super-accurate landmark database for unrelated business ventures. I think the reason Google's self-driving vehicles work better than Tesla's is because Google held themselves to ludicrously high standards, half of which were for reasons unrelated to self-driving, and the likes of which are probably unattainable for more than a handful of tech megacorps. That they use LIDAR is immaterial - they've been using it since well before the system costs made commercial sense.

As for the rest of Tesla's competitors... when BigAutoCorp presents their risk management case to the government body authorizing the sale and usage of self-driving technology, it sounds a lot more convincing to say "cost is no obstacle to safety" as you strap a few thousand bucks of LIDAR to every machine and spend another few dozen engineering salaries every year on LIDAR R&D. A decade of pushing costs down has brought LIDAR to within an order of magnitude of the required threshold for consumer acceptance. I'll note that comparatively, camera costs were never an obstacle to Tesla's target pricing or market penetration. Solving problems with better hardware is fun, but solving problems with better software is scalable.

That's not to say Tesla's software is better though. I can't tell if Tesla's standards are lower than their competitors, or if their market penetration is large enough that they have a longer tail of publicized self-driving disasters to draw from, or if there's a fundamental category of objects their current cameras or software can't properly detect. Speaking from experience, I've seen autopilot get very confused by early-terminating lane markers, gaps in double yellow for left turns, etc. I think their software just kinda sucks. It's probably tough to identify the performance differences in good software with no LIDAR and bad software with LIDAR; comparatively much easier to identify bad software with no LIDAR. And really easy to blame the lack of LIDAR when you're the only people on Earth foregoing it.

The problem with the Tesla stance is that cameras (affordable ones, anyways) are still way behind human eyes -- it's not just dynamic range, the resolution/FOV tradeoff is extremely bad.

This guy estimates you would need ~576MP streaming at 30FPS with a FOV of 120 degrees to get close (actual FOV is more than that; depends how many cameras you want to have I guess). Such a system would be way more expensive than a LIDAR unit, safe to say -- especially if you expect to catch up with the 14 stop DR, which might not even be possible with current sensors.

Not sure what Tesla is using for resolution, but the extra acuity is surely not wasted in terms of picking out faraway objects and even figuring out roadlines -- this eats into the theoretical reaction-time advantage of AVs substantially.

~576MP streaming at 30FPS with a FOV of 120 degrees

This is not quite right. Eyes have a huge overall FOV, but the actual resolution of vision is a function of proximity to foveation angle, and there's only maybe a 5° cone of high-resolution visual acuity with the kind of detail being described. Just taking the proposed 120° cone and reducing it to 5° is more than a 99% reduction in equivalent megapixels required. And the falloff of visual acuity into peripheral vision is substantial. My napkin math with a second-order polynomial reduction in resolution as a function of horizontal viewing angle puts the actual requirements for megapixel-equivalent human-like visual "resolution" at maybe a tenth of the number derived by Clark. None of that is really helpful to understanding how to design a camera that beats the human eye at self-driving vision tasks though, because semiconductor process constraints make it extremely challenging to do anything other than homogeneously spaced CCDs anyway.
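Here's a toy version of that napkin math. The quadratic falloff is the shape of the argument above, but the 35° cutoff and the 2% peripheral floor are numbers I picked purely to illustrate; they're not measured values:

```python
import numpy as np

# Clark's ~576 MP figure assumes uniform foveal acuity over a 120° field.
# Instead, let linear resolution drop quadratically with eccentricity
# (assumed cutoff at 35°, assumed 2% floor in the far periphery).
full_fov_mp = 576.0
theta = np.linspace(0.0, 60.0, 2001)               # eccentricity from fovea, degrees
rel_res = np.maximum(1.0 - (theta / 35.0) ** 2, 0.02)

# Pixel *density* scales as the square of linear resolution; weight by
# theta for the growing annular area at each eccentricity.
frac = (rel_res ** 2 * theta).sum() / theta.sum()
print(f"~{full_fov_mp * frac:.0f} MP")             # prints roughly 65 MP, ~a tenth of 576
```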

On top of that, the "30FPS" discussion is mostly misguided, and I don't actually see that number anywhere in the text; I only see a suggestion that as the eye traverses the visual field, the traversal motion (Microsaccades? Deep FOV scan? No further clarity provided) fills in additional visual details. This sounds sort of like taking multiple rapid-fire images and post-processing them together into a higher-resolution version, something commercial cell phone cameras have done for a decade now. This part could also be an allusion to the brain backfilling off-focus visual details from memory. It's unclear what was meant.

especially if you expect to catch up with the 14 stop DR, which might not even be possible with current sensors.

This is already a solved problem, and has been for at least five years. Note that in five years, we've added 20dB dynamic range, 30dB scene dynamic range, bumped up the resolution by >6x (technically more like 4x at same framerate, but 60FPS was overkill anyway), and all that in a module cost that I can't explicitly disclose but I can guarantee you handily beats any LIDAR pricing outside of Wei Wang's Back Alley Shenzhen Specials. And it could still come down by a factor of 2 in the next few years, provided there's enough volume!
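For anyone trying to reconcile those dB figures with the "14 stop" number above, the conversion is simple. I'm assuming the common 20*log10 convention for sensor dynamic range specs here:

```python
import math

# One photographic stop is a factor of 2 in light.
def stops_to_db(stops: float) -> float:
    return 20.0 * math.log10(2.0 ** stops)

def db_to_stops(db: float) -> float:
    return db / (20.0 * math.log10(2.0))

print(f"14 stops = {stops_to_db(14):.1f} dB")     # ~84.3 dB
print(f"+20 dB   = +{db_to_stops(20):.1f} stops") # ~3.3 stops
```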

In any case, remember that the bet isn't beating the human eye at being a human eye, it's beating the human eye at being the cheap, ready-today vision apparatus for a vehicle. The whole exercise of comparing human eye performance to camera performance is, and has always been, an armchair philosopher debate. It turns out you don't need all the amazing features of the human visual system for the task of driving; the eye is sufficient for the problem, but not necessary. You need a decent performance, volume-scalable, low-cost imaging apparatus strapped to a massive amount of decent performance, volume-scalable, low(ish)-cost post-processing hardware. It's a pretty safe bet that you can bring compute costs down over time, or increase your computational efficiency within the allocated budget over time. It's also a decent bet that the smartphone industry, with annual camera volumes in the hundreds of millions, is going to drive a lot of that camera manufacturing innovation you need, bringing the cost down to tens of dollars or better. Most of the image sensors are already integrating as much of the DSP on-die as possible, in a bid to free up the post-processing hardware to do more useful stuff, and that approach has a lot of room to grow in the wake of advanced packaging and multi-die assembly innovations in the last ten years. All the same major advances could eventually arrive for LIDAR, but it certainly didn't look that way in 2012, and even now in 2023 it still costs me a thousand bucks to kit out an automotive LIDAR because of all the highly specialized electromechanical structures and mounting hardware, money I could be using to buy a half-dozen high-quality camera modules per car...

As far as reaction time, real-time image classification fell to sub-frame processing time years ago, thanks in part to some crazy chonker GPUs available in the last few years. There's a dozen schemes for doing this on video, many in real-time. The real trouble now is chasing down the infinitely long tail of ways for any piece of the automotive vision sensing and processing pipeline to get confused, and weighing the software development and control loop time cost of straying from the computational happy path to deal with whatever they find.

This is also why I think Tesla's software just sucks. It's not the camera hardware that's the problem any more, and the camera hardware is still getting better. There's just no way not to suck when the competition is literally a trillion-dollar gigalith of the AI industry that optimized for avoiding bad PR and waited an extra four years to release a San Francisco-only taxi service. Maybe if Google was willing to stomach a hundred angry hit pieces every time a Waymo ran into a wall with the word "tunnel" spray-painted on it, we'd have three million Waymos worldwide to usher in a driverless future early. I doubt Amazon has any such inhibitions, so I guess we'll find out soon just how much LIDAR helps cover for bad software.

I'm not yet sold on the economics of autonomous ridesharing. The only reason to favor autonomous over a driver is cost, and I think most cost projections leave out the fact that fares will have to account for the massive amounts of R&D that will precede any viable service, plus the added expense of the vehicles themselves. There's also the fact that people tend to behave better when they're in the presence of a driver; you may not need to pay someone to drive, but you'll certainly have to pay someone to clean out the vehicles every day. Not to mention that if a drunk guy tells a Lyft driver that he's about to throw up, you know damn well that the driver will be pulling over immediately. The autonomous vehicle would probably have a "request stop" button that would drive around looking for a safe place, by which time it may be too late. Then there are the people who think that car ownership will be replaced by these things, but that argument will have to wait for another day.

Well, R&D is a sunk cost. It might increase the average cost, but once developed, the marginal cost may be low.

All good points, but as a hardware developer I advocate more hardware-based solutions. And I'm pretty sure Amazon has multiple internal cameras monitoring passengers. Vomiting and dumping trash won't be free on this ride.

That's the obvious stuff that would be pretty easy to police given the bad-faith nature of it. I'm more concerned about normal people who aren't trying to slop up the car but inadvertently do so anyway. Like the family that goes to the beach and gets sand everywhere because there isn't a driver there providing subtle pressure to make sure that everything is brushed off as well as possible. Or someone eating in the car who otherwise wouldn't because he didn't want to ask the driver if it was okay. Stuff like that, where normally responsible people are a little bit more liberal with rental equipment when they're unsupervised.

Aren't NYC's Nissan van taxis designed to be easy to clean and hose out? I mean, the car will still have to come into the station to be cleaned out, but that's still a tractable problem.

Until it gets out that it's disproportionately left-favored demographics being punished for the puking and fucking and shitting and property damage (conveniently ignoring that it's also members of those demographics committing the offenses), and Amazon will have to either capitulate on enforcement or risk losing all those juicy federal contracts.

People shit and use drugs on public transit because it is public.

Ubers and other ride shares can charge people large fees for vomiting or simply ban them for terrible behavior. Also, the non-public nature and the small financial barrier to access passively keep out the fentanyl zombies and public shitters. I don't foresee them being forced to let such people use ride shares or pod cars.

I wish that site was easier to read. It's some sort of podcar?

Yes. The point being you use your phone to order a self driving car to take you somewhere around an urban area.

The fourth law of thermodynamics states that the first 90% of the project takes 90% of the time, and the last 10% of the project takes the other 90% of the time.

It blows my mind how so many people who really should know better are so eager to declare that they’ve reached the finish line, because “there are only a few minor problems left that will be resolved in a year or two”. Anyone who has delivered a non-trivial product to real users (particularly in software engineering) knows just how much time and effort gets eaten up during the long slog to 100%, and how many unforeseen problems can crop up.

You’ve reached the finish line when you’ve reached the finish line. You’ve reached the finish line when the product works as advertised and is being used for its intended purpose. Not before.

And yet I think with self-driving and AI we should ask for good enough. I want Rosie from The Jetsons for my home and a car that can take me home safely enough in normal road conditions. If Rosie occasionally folds the family cat in with the rest of the laundry, I can live with that.

And yet I think with self-driving and AI we should ask for good enough.

We most certainly should not.

AVs are a classic example of a technology where "good enough" isn't good enough. AVs need to be at least as safe as a top human driver; personally, I want them to be strongly superhuman in all driving conditions before I would consider getting into one. I'm not going to die because some tech CEO just thought it would be really cool to put his AV on public highways before it was actually ready. I don't know why this would even be up for debate.

In general I'm sick of "good enough". I want correct. I want quality. From the industrial revolution up until the present day, automation has frequently been accompanied by a decline in the quality of goods and services (not to mention a decline in working conditions; the Luddites weren't anti-technology per se, they were anti working 12-hour factory shifts for a pittance). Has customer service gotten better because of automation? "Let me talk to a damn human" has become a common refrain. You know, you have writers being told "we know you're a lot better than GPT, but we're going with GPT anyway because it's cheap and good enough". This should not be acceptable. I mean on a broad cultural level, it should be baked into the fabric of people's attitudes that this is not acceptable.

I am an unabashed supporter of the MIT school of design over the New Jersey school. I think this is part of the reason why LLMs just irritate me on a fundamental level - it's because they got so far, so fast, using only bull-headed statistical methods that required no prior theoretical understanding of the structure of language, and make no guarantees about the robustness or the correctness of the underlying system. It feels like cheating, basically. I understand Chomsky's chagrin well. It feels like something very important to you, something you care deeply about, is being rejected by the laws of nature.

With each passing year I find more and more reason to be sympathetic to the Marxist position. If this is capitalism, then capitalism sucks.

I'm not going to die because some tech CEO just thought it would be really cool to put his AV on public highways before it was actually ready. I don't know why this would even be up for debate.

No, you're going to die because of a preference to die under the wheels of a pothead scrolling their phone, a drunk that doesn't like cyclists, or just regular old human error instead. Of course, you probably won't wind up dying in any of these ways, but the obvious question is which policy gives you lower odds of dying rather than whether a system can prevent all errors under all circumstances.

Has customer service gotten better because of automation? "Let me talk to a damn human" has become a common refrain.

In my experience, yes, it has. I can frequently resolve problems using automated systems that are easier and faster to use than working with humans. For example, if my flight can't take off due to weather and I'm going to miss my connection, I can hop on my phone and the system will have suggested options for me to switch to; I can see the options myself and select which one is best for my travel preferences. A human could do that too, but they have to do it for every individual trying to make that connection, which results in a big line. If I have anything weird that I'd prefer, I have to explain it rather than just pressing the buttons.

On the flip side of things, my experience with "let me talk to a damn human" is that it's usually someone looking for some sort of special favor that's outside of policy and may or may not be possible. I have repeatedly experienced this mentality from people that I've worked with in customer service that really want to get on the phone to explain their situation, when I actually don't care what their reasons are and I'm just going to reiterate that the policy is what it is, I will not be giving you anything free or making exceptions because you really want them.

So yeah, on net, I would consider automated solutions to have sharply improved my experience as both a customer and service provider.

Will they jail the reckless CEO the same way as a reckless pothead driver when their car kills someone (or 10/100/1000 someones)?

The long and short of it with AVs is that driving is one of my main sources of relaxation and pleasure, and so I'll only give up my manual car when you pry it out of my cold dead hands. I could take a more "objective" view of the minutiae of AV policy if I was pressed to. But I'm not particularly interested in doing so.

my experience with "let me talk to a damn human" is that it's usually someone looking for some sort of special favor that's outside of policy and may or may not be possible.

I was definitely subconsciously reaching for this notion when I brought up customer service. Thank you for articulating it for me.

Society should be set up in such a way that special favors are possible. It should be malleable, pliable, it should admit of edge cases and exceptions. It should be conceivable that you can lean on a human's empathy or frailty or inadequacies in order to get things done. That's what a human society looks like.

To be sure, the failure modes of such a style of operation when taken to excess are well understood. But it's still preferable to the alternative of a fully mechanized and perfectly efficient society, with all its cold digital exactitude. I don't want every public and private institution to operate like Google and Youtube - no way to talk to a human, no edges or seams, solid, impenetrable, immovable. Do you want your employer to operate that way? Or the criminal justice system? Will JudgeGPT be susceptible to the eleven magic words? Let's hope he has a particularly wise philosopher-king in charge of his alignment and RLHF.

You want it both ways then man. You can't have a society that does things correctly instead of good enough but is also malleable enough to allow for people to make special accommodations for you - special accommodations are not correct. I think what you are asking for is something a lot of people want - it's definitely something I'd like - a return to the society of the past where things were basically correct, but the human factor meant a savvy operator could extract accommodations, because you are a savvy operator. But we can't have that, because we already taught everyone to be a savvy operator. We poisoned the well ourselves with our savvy operations.

You want it both ways then man. ... special accommodations are not correct.

Au contraire.

A computer system is not a work of art; a work of art is not a human individual; and a human individual is not society as a whole. Things that are distinct should be judged by their own distinct standards that are proper to them. A standard of correctness that applies to one type of thing may not apply to another type; indeed, the entire notion of correctness may be appropriate to one category but actively detrimental to another.

Not that I have any particular qualms about contradicting myself anyway. Contradiction bears witness to the life of thought.

But we can't have that

I want what I want, based on my judgment of what is good and proper. It's no skin off my back if I "can't have it".

Sorry, I didn't explain myself well enough, I'll try again.

I'm not going to die because some tech CEO just thought it would be really cool to put his AV on public highways before it was actually ready. I don't know why this would even be up for debate.

It's up for debate because of your contradictions. The tech CEO who thinks being cool is more important than superhuman safety got there because he's a savvy operator. Savvy operating is cool. He doesn't exist in the world where you can't get special exceptions, because in that world he's not a CEO, he's in prison for fraud or malpractice or the like. But in a world where you can game human foibles to your own advantage, every human foible will be gamed.

I want what I want, based on my judgment of what is good and proper. It's no skin off my back if I "can't have it".

I don't understand what you mean, if you can't have what you want isn't that exactly skin off your back? Or is it just the wanting that gets your motor going?

I don't want every public and private institution to operate like Google and Youtube - no way to talk to a human, no edges or seams, solid, impenetrable, immovable.

They operate that way because of some SCOTUS decisions on mandatory arbitration. In the EU, where those parts of the contracts are void, they are a bit more responsive.

In general I'm sick of "good enough". I want correct. I want quality.

Not available. You can get more quality, at the cost of money, effort, and time. But returns diminish very quickly, and you never, ever, reach perfection. And you lock in the deficiencies of your earlier approaches.

With each passing year I find more and more reason to be sympathetic to the Marxist position. If this is capitalism, then capitalism sucks.

This is Marxism. It sucks worse.

I don’t know if things have been getting worse since the Industrial Revolution. There are many things that are great. But I do think it requires some idiosyncratic leaders. I’m typing this on an iPhone. It is a fine device. But one thing I’ve noticed is it doesn’t always “just work.” Sometimes simple functionality goes awry in a way it didn’t ten years ago. Yes, it has all sorts of cool other functionality, but those extra bells and whistles don’t really make up for simple reliability. I’m probably mythologizing here a bit, but it was different when Jobs was alive. One of his mantras was to make sure the whole process was simple for the end customer. Sure, it might be difficult to do as cool of things as you could on other computers, etc., but the simple things just worked, and the simple basic things are what most people need.

I don’t know if things have been getting worse since the Industrial Revolution.

Ooh! Ooh! (waves hand) I know!

But what if you aren't a dying child?

We'll all have our own subjective preferences, but I despised the iPhone when Jobs was alive. The level of locked-down rigidity was incredibly grating to me, the typing interface was clunky and unusable, and every time I grabbed one I found it to be a terrible product. I switched from Android to iPhone well after his death and now find the product pretty enjoyable. I don't really see the shortage of reliability, although there are certainly applications that just plain suck, I don't really view that as Apple's fault.

Yes! I was trying to get at this because I knew the phenomenon had a name but I couldn't think of what it was.

Wikipedia offers the name the ninety-ninety rule. I've never heard a specific name for it, although I've definitely heard it used more generally than just about programming projects.

there are only a few minor problems left that will be resolved in a year or two

I always fondly remember this (partly apocryphal) story told by my physics teacher of Lord Kelvin's famous speech to the Royal Society, in which he declared that we were on the cusp of completing a full understanding of the physical universe, except for two small clouds on the horizon: the relative motion of the ether with respect to massive objects, and the Maxwell-Boltzmann doctrine of the equipartition of energy.

Of course these two issues would end up requiring the invention of special relativity and quantum mechanics to solve, and with them came the undoing of classical physics as a whole, along with the sense of certainty physicists of 1900 had about their discoveries (though not Kelvin himself, despite often being quoted as such).

Interestingly, if you ask GPT what those two clouds were, you will get a wide variety of answers, including references to the UV catastrophe and black-body radiation, despite Kelvin making no reference to those.

And that's because it's just a common misconception even scholars repeat uncritically.

AVs have to be much less likely to crash than the average driver, and if you have a person sitting in the car ready to handle errors then there are no productivity gains. If you have an AI writing ad copy, diagnosing illnesses, translating documents, or writing contracts, and you have one person there checking for egregious errors, that's still a productivity improvement over the status quo.

If we don't get superintelligence that's probably what AI ends up as, a productivity booster people with expertise can pass technical work off to.

The area I expect it will revolutionize is interactive fiction. Video games obviously will get better NPCs, but I think there's real potential for art where the creator engineers complex AI characters and lets them dynamically react to some set of initial conditions to produce a narrative.

Speaking of which, if you have seen the recently released The Kraken Wakes, which purports to do exactly this (AI characters you just chat with to advance the plot and uncover the mystery), don't believe it. Forget ChatGPT; they're barely even Tay-level chatbots.

That said, I would love to play a version of Anchorhead with AI characters. I can't imagine how The Counterfeit Monkey would incorporate AI, but it's flawless already. The first game I'd try to incorporate AI into would be Morrowind; its dialogue system borrowed a lot from IF and simplified it without dumbing it down to Oblivion levels.

The area I expect it will revolutionize is interactive fiction. Video games obviously will get better NPCs, but I think there's real potential for art where the creator engineers complex AI characters and lets them dynamically react to some set of initial conditions to produce a narrative.

This gets dangerously close to AGI. To make a believable AI NPC that is more than an information kiosk with Korsakoff syndrome, you have to give the AI long-term memory and a personal agenda (see the sketch after this list). The agenda will be limited to the virtual world at first, but it's an easily transferable skill:

  • make an interactive murder mystery play in Second Life, win a Tony

  • make an interactive murder mystery play on Broadway using Boston Dynamics androids, win a Tony

  • chase a murderous android across Manhattan
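To make the memory-plus-agenda point concrete, here's a minimal sketch of such an NPC loop. Everything here is hypothetical illustration: complete() is a stand-in for whatever LLM backend you'd actually call, not any real API.

```python
from dataclasses import dataclass, field

def complete(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call; plug in a real backend here."""
    raise NotImplementedError

@dataclass
class NPC:
    name: str
    agenda: str                                      # long-horizon goal, e.g. "hide the murder weapon"
    memory: list[str] = field(default_factory=list)  # persists across turns

    def respond(self, player_line: str) -> str:
        # The prompt carries both the private agenda and the accumulated
        # memory, which is what separates this from a stateless kiosk.
        prompt = (
            f"You are {self.name}. Your private goal: {self.agenda}\n"
            "What has happened so far:\n" + "\n".join(self.memory[-50:]) +
            f"\nPlayer: {player_line}\n{self.name}:"
        )
        reply = complete(prompt)
        self.memory.append(f"Player: {player_line}")
        self.memory.append(f"{self.name}: {reply}")
        return reply
```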