
Culture War Roundup for the week of April 10, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

But if you press them on the topic, or actually look at chess games that GPT has played, it becomes readily apparent that GPT makes a lot of stupid and occasionally outright illegal moves (e.g. moving rooks diagonally, attacking its own pieces, etc.). What this demonstrates is that GPT does not "know how to play chess" at all.
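The "rooks moving diagonally" failure is the kind of thing you can check mechanically rather than by eyeballing transcripts. A minimal sketch in Python (the helper name is mine, and it checks only the geometric rule, ignoring blocking pieces, captures and turn order):

```python
def rook_move_is_legal(src: str, dst: str) -> bool:
    """Geometric legality of a rook move, squares in algebraic
    notation (e.g. 'a1'): the move must stay on exactly one of
    the same file or the same rank -- and must actually move."""
    same_file = src[0] == dst[0]
    same_rank = src[1] == dst[1]
    return same_file != same_rank  # one, but not both (and not neither)

# A rook sliding up its own file is fine; a diagonal "move" is not.
print(rook_move_is_legal("a1", "a8"))  # True
print(rook_move_is_legal("a1", "h8"))  # False: diagonal
```

In practice you'd replay the model's whole game through a real move validator (the python-chess library can do this) and count how many games survive without an illegal move.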

Imagine a blind person, without any sense of touch or proprioception, who has only heard people talk about playing chess. They have never seen a chessboard, never picked up a rook, and the mere concept of moving pieces is completely foreign to their sensorium.

And yet, when pressed, said person is able to play mostly legal moves, all the while holding the entire state of the board in their head. Correspondence chess via Chinese telephone.

I think anyone who witnessed such a feat would be justified in being highly impressed. You, meanwhile, are the equivalent of someone complaining that a talking dog isn't a big deal because it has an atrocious accent, whereas Yudkowsky et al. are rightly pointing out that you can't find a better way of critiquing a talking dog! Especially a talking dog that gets ever more fluent with additional coaching, to the point that it knows more medicine than I do, understands quite complicated math, and in general does a better job of being a smart human than the average human does.

In a park people come across a man playing chess against a dog. They are astonished and say: "What a clever dog!" But the man protests: "No, no, he isn't that clever. I'm leading by three games to one!"

Do the dogs not speak wherever it is you are from?

Part of my point is that computer programs being able to play chess at or above human level has been the norm for close to 40 years now. I would argue that the apparent inability to match that capability is a step backwards.

It's a step in a different direction, not backwards. First people programmed computers with "play chess, like this", and because they could do it faster they eventually got better than humans at chess. Then people programmed computers with "learn to play simulatable games well", and they soon got better than humans because chess is a simulatable game, and although they also got better than the first computers it wouldn't have been a failure otherwise because the point of the exercise was the generality and the learning. Now people have programmed computers with "learn to write anything that humans might write", and yes they're still kinda crummy at most of it, but everyone's dumbfounded anyway, not because this is the way to optimize a chess engine, but because it's astounding to even find a crummy chess engine emerge via the proposition "chess play is a subset of 'anything'".

Is this a dead end? The real world isn't as amenable to simulation as chess or go, after all, and LLMs are running low on unused training data. But with "computers can learn to do some things better than humans" and "computers can learn to do practically anything" demonstrated, "computers can learn to do practically anything better than humans" should at least be imaginable at this point. Chess isn't a primary goal here, it's a benchmark. If they actually tried to make an LLM good at chess they could do it easily, but that would just be Goodharting themselves out of data. It will be much more interesting when the advances in GPT-5 or 6 or whenever make it a better chess player than humans incidentally.

It's the claim that "computers can learn to do practically anything" has already been demonstrated that I am calling into question.

If nobody has made a Stockfish ChatGPT plugin yet I am sure it is only a matter of a few days. People are impressed by ChatGPT playing kinda okayish chess without making use of external tools, despite the fact that even amateur chess players can run circles around it, for the same reason they're impressed with Usain Bolt running 100m in 9.58 seconds despite the fact that a scrawny teenager who gets out of breath when they get up off the couch could go 100m in less than half the time on a Kawasaki Ninja.

There's around 0 dollars to be made by making a chess bot better than Stockfish. The days of rolling them out to spank the human pros are long gone; they just get up and start running for the hills when you pull out even the kind of bot that runs on an old smartphone.

In contrast, an AI that does tasks ~80% as well as a professional can, for pretty much all tasks that involve text, is economic disruption in a tin can. (Emphasis on professionals, because it is genuinely better than the average human at most things, because the average human is an utter humpty)

Notice how I said that it's a better doctor than me? Consider how much we spend on healthcare, one of the thousands of industries about to be utterly disrupted.

In contrast, an AI that does tasks ~80% as well as a professional can, for pretty much all tasks that involve text, is economic disruption in a tin can

But the difference is still in the tails. The top 1% is where the money is made in any competitive industry. That is why top tech companies are so obsessed with talent and recruiting. That is harder to automate than the rest.

Notice how I said that it's a better doctor than me? Consider how much we spend on healthcare, one of the thousands of industries about to be utterly disrupted.

It can automate the diagnosis process based on some input of symptoms, but other parts are harder, like treatment. Same for invasive tests and biopsies. AI will disrupt it in some ways, but I don't think it will lower costs much.

I think you're adopting too much from a programming background when it comes to productivity. 10x programmers are far more common than 10x doctors or lawyers, because it isn't nearly as feasible to simply automate the gruntwork without hiring more junior docs/lawyers.

I would say that productivity in the vast majority of professions is more along the lines of the Pareto Principle, such that an 80% competent agent can capture a substantial chunk of profits.

And what exactly is so hard about treatment? An AI doctor can write drug charts and have a human nurse dispense them. Invasive tests and biopsies are still further away, but I fully believe that the workload of a modal doctor in, say, Internal Medicine can be fully automated today without any drawbacks. The primary bulwark against the tide is simply regulatory inertia and reflexive fear of such unproven advances.

Is there a good AI substitute for clinical examinations at present, or are we going to rely on patients self-examining?

I can honestly buy that in the short-medium term AI would take a better history and get differentials and suggest treatment plans better than the modal doctor. I could even buy that within that timeframe you could train AI to do the inspection parts/things like asterixis, but I don’t know how you’d get an AI to…palpate. Movement and sensation etc. are quite difficult for computers, I am to understand.

Alternatively maybe they’d just get so fucking good at the rest of it that professional examinations aren’t needed anymore, or that some examination findings can be deduced through other visual/etc means…

You'd be rather surprised at how little doctors palpate, auscultate etc. in practice. They're mostly used for screening; if there's any notable abnormality, patients get sent straight off to imaging instead of simply relying on clinical signs as was once common. It certainly plays a role, but with robots with touch sensors, it's hardly impossible to have AI palpate; it's just a skill that's rapidly becoming outmoded.

Oh I know well how doctors don’t do the things they teach us to do in medical school! But it did seem like one thing that they can’t (that easily) but we can (relatively easily), due to it being more of a physical and tactile thing.

That said, I find that I do examine people at least a few times a day.

I agree it’s hardly impossible but I’d be surprised if it wasn’t markedly harder to train?

I didn't realize you were a doctor too, or I'd have elaborated further! For example, humans have fine touch, pressure and proprioception right? That's how we feel a lump below all the subcutaneous tissue.

My Google-fu has failed me, and I can't find the video in question, but a year or two ago I saw a demonstration of a robot that had learned to do the same: identify and outline objects through pressure alone, without visual imaging.

They took a hard object and embedded it within gel that had the same consistency as human tissue, and then the robot used pressure sensors to accurately identify the foreign object without directly touching or visualizing it.

The only reason we don't see that being done in clinical practice or in robotic surgery is because humans can do it themselves, or because by the time someone ends up on the operating table you don't need to palpate at all anymore. It's not an insurmountable problem!

It's harder to train in the sense that there's less data for grounding. On the other hand, we can cheaply make robot fingertips with superhuman tactile resolution, and if anyone bothered, it'd be easy to train a model (riding on top of some multimodal LM, probably?) on general tactile recognition in reality and simulation, and then finetune it in the clinical setting. This isn't very different from how humans are trained. How many hours of palpation did you do in your life? It's a minor addition to your general manual skill. And even if sample efficiency turns out to be abysmal in comparison, two hundred hands at $1000 a pop, over a year, do not amount to even one American GP's compensation. Granted, proper hands are for now much more expensive, mostly due to small-scale production (which in turn is explained by worthless software), but I expect this to be solved rapidly once Tesla Optimus, 1X and other robots enter the market.

Actually sounds like a cool project for the developing world (@self_made_human, what do you think?). Might even increase the diagnostic value of tactile assessment. Too bad we can't have nice things.

If GPT hallucinates a wrong and deadly treatment, who do you sue for malpractice?

Right now? Nobody, because it's not licensed for medical use and uses massive disclaimers.

In the future when regulators catch up and it's commercially deployed and advertised to that end? Whoever ends up with the liability, most likely the institution operating it.

I see this as a comparatively minor roadblock in the first place.