Culture War Roundup for the week of April 17, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Is the rapid advancement in Machine Learning good or bad for society?

For the purposes of this comment, I will try to define good as "improving the quality of life for many people without decreasing the quality of life for another similarly sized group," and vice versa.

I enjoy trying to answer this question because the political discourse around it is too new for the two American political parties to have disseminated widely accepted answers that get used to signal affiliation, as happens with so many other questions. However, any discussion of whether something is good or bad for society belongs in a Culture War thread because, even here on The Motte, most people will try to reduce every discussion to one along clear conservative/liberal lines, since most people here are salty conservatives who were kicked off reddit by liberals one way or another.

Now on to the question. Maybe the best way to discover whether machine learning is good or bad for society is to ask what makes it essentially different from previous computing. The key difference is that machine learning changes computing from a process where you tell the computer what to do with data into a process where you just tell the computer what you want it to be able to do. Before machine learning, you would tell the computer specifically how to scan an image and decide whether it is a picture of a dog, and whether the computer was any good at identifying pictures of dogs depended on how good your instructions were. With machine learning, you give the computer millions of pictures of dogs and tell it to figure out for itself how to determine whether there's a dog in a picture.
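To make the contrast concrete, here's a minimal sketch. The toy two-number "images", the hand-coded rule, and the use of scikit-learn's LogisticRegression are all stand-ins chosen for illustration, not how any real dog detector works:

```python
# Minimal sketch of the two paradigms, on toy data rather than real images.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Classical programming: we write the decision rule ourselves.
def is_dog_classical(image):
    # Hypothetical hand-coded rule; its quality depends entirely on our cleverness.
    return image.mean() > 0.5

# Machine learning: we hand over labeled examples and let the computer find
# the rule. Each "image" here is just two numbers standing in for pixels.
rng = np.random.default_rng(0)
images = rng.random((1000, 2))                    # 1000 toy "images"
labels = (images.sum(axis=1) > 1.0).astype(int)   # 1 = "dog", 0 = "not dog"

model = LogisticRegression().fit(images, labels)  # the rule is learned, not written
print(model.predict(images[:5]))                  # learned rule on sample inputs
print(labels[:5])                                 # ground truth for comparison
```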

So what can be essentialized from that difference? Before machine learning, the owners of the biggest computers still had to be clever enough to use them to manipulate data properly; with machine learning, the owners of the biggest computers can simply specify a goal and get what they want. It seems, therefore, that machine learning will work as a tool for those with more capital to find ways to gain more capital. It will allow people with money to create companies that make decisions based purely on profit potential, removing the human element even further from the equation.

How about a few examples:

Recently a machine learning model was approved by the FDA to identify cavities on X-rays. Eventually your dental insurance company will require a machine learning model to read your X-rays and report that you need a procedure before they will cover treatment from your dentist. The justification will be that the model is more accurate, and it probably will be. Dentists will need subscriptions to a machine learning model to accept insurance, and perhaps dental treatment will become more expensive, but maybe not. It's hard to say for sure whether this will be a bad or a good thing.
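Purely as a hypothetical illustration of that gating logic (the scoring function, the threshold, and the approval rule below are all invented for this sketch, not taken from any real insurer or FDA-cleared product):

```python
# Hypothetical sketch: insurance coverage gated on a model's read of an X-ray.
import numpy as np

def cavity_score(xray: np.ndarray) -> float:
    """Stand-in for a trained detector; returns a made-up P(cavity present)."""
    return float(xray.mean())  # placeholder, not a real model

def insurer_approves(xray: np.ndarray, threshold: float = 0.8) -> bool:
    # Coverage only if the model agrees the procedure is needed.
    return cavity_score(xray) >= threshold

xray = np.random.default_rng(1).random((512, 512))  # fake 512x512 radiograph
print("claim approved:", insurer_approves(xray))
```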

Machine learning models are getting very good at writing human-sounding text, and they are rapidly reducing the value of human writers. Presumably more advanced models will replace commercial human writing altogether. Every current limitation of the leading natural-language models will be removed in time, and they will become objectively superior to human writers. This also might be a good thing, or a bad thing. It's hard to say.
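For anyone who hasn't tried it, generating this kind of text is only a few lines with an off-the-shelf model. A minimal sketch using Hugging Face's transformers library, assuming it's installed and can download the small gpt2 checkpoint (far from a leading model, but the workflow is the same):

```python
# Minimal text-generation sketch with an off-the-shelf model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
out = generator("The future of commercial writing is", max_new_tokens=40)
print(out[0]["generated_text"])
```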

I think it's actually very hard to predict whether machine learning will be good or bad for society. Certain industries might be disrupted, but the long-term effects are difficult to foresee.

Anything that takes us closer to post-scarcity is good from my perspective. I disagree with some people I otherwise respect, such as Ilforte, on the fundamental benevolence (or rather, absence of malevolence) of the ruling class, especially the ones who will end up wielding the power put in their hands by AGI. It will cost them very little indeed to at least maintain the standard of living of everyone alive today, and little more to raise everyone's to First World upper-middle-class levels.

Upload everyone into VR, and it's quite possible that everyone can experience eudaimonia on a 10 watt budget.

Now, I'm not a happy person. I've been struggling with depression for so long that I've forgotten what it might ever have felt like not to be under a cloud. I feel fundamentally burned out at this point, experiencing something between learned helplessness and nihilism regarding AI advances. What'll happen will happen, and everyone here is only running commentary on the impending Apocalypse.

Back when I was entering med school, I consoled myself that the suffering was worth it because medicine was likely to be among the last fields to be automated away. Can't say that I feel very vindicated, because the automation overhang is here, and I see the Sword of Damocles dangling overhead when I think about further professional advancement.

It seems awfully clear to me that medicine is about to be automated. GPT-4 is a good doctor: probably not the best possible doctor, but already outperforming the average in an incredibly competitive and cognitively demanding profession. I look at the further slog of psychiatry training ahead of me and shiver, because there's absolutely no way that, by the time I'm done, I'll be employed by anything other than the grace of regulatory inertia rather than genuine competitiveness.

Instead of a gradual deployment (over two or three years; I had short timelines even then) in which AI came for radiologists, then ophthalmology, all the way to surgery and then psych, it seems to me that the pressure will mount until regulatory bodies cave, and overnight everyone from the lowliest janitor to the highest-ranking neurosurgeon will find themselves out on their arse.

What pisses me off further is that this also slams shut the clearest pathway I have to betterment and an improved quality of life, namely emigration to the First World. Not a consideration for the average person here, since you're already living there, but simply imagine how fucking terrible it is to face the wall of obsolescence without a government that can even in theory maintain living conditions by redistributing wealth.

As a concrete example, the NHS is largely propped up by foreign doctors, with a large fraction of the locals fleeing to greener shores such as the US or Australia. Pay has stagnated for a decade, prompting serious strikes, currently ongoing, to achieve inflation-based pay restoration.

Even today, when automation is merely imminent, the British government has publicly stated its intent to automate as much of medicine as it can to stomp down on them uppity doctors who aren't content with sub-market pay from a monopsony employer. Do you think those cheap bastards will hesitate for more than a microsecond to get rid of doctors, or at least their pay, when the moment finally arrives?

I see British doctors mocking those claims today. As much as I support their attempts at pay restoration for selfish reasons, neither I nor they will be laughing much longer.

Maybe American doctors will hold out a little longer, since you lot clearly aren't very concerned with efficiency in your healthcare expenses, but places like India, or the slightly whiter version of the Indian subcontinent, will end up clamoring to cut any expense they can from their state-run public health services.

I'm fucked, clearly out of good options, and now picking the least bad ones.

On the note of doctors, the medical guild has always been the most robust, perhaps excepting the lawyers', at defending its monopoly. I would be willing to bet that doctors will still resist automation through regulatory barriers for quite a while.

Even if that doesn't shake out, it could be a scenario where human augmentation rolls out relatively slowly. You, being a transhumanist, should greatly benefit in a lot of those scenarios. I imagine the vast majority of people alive today will be unwilling to augment themselves for purity-based reasons. Not having that hangup alone would be a huge competitive advantage.

If all else fails, you can always mortgage your future computing space for a loan or something and hope to jump up into the immortal class. I for one hope you can make it, although I'll admit I'm not the most optimistic when it comes to proles getting access to longevity technology.

Doctors have successfully defended their guild (albeit far more so in the US than the UK) because they were indispensable. Training replacements for disgruntled doctors would take a great deal of time, and while medical education isn't perfect, you can't really circumvent most of it without ending up with noticeably worse practitioners.

That changes greatly when human doctors become outright obsolete. Speaking in the UK context, I have little doubt that the government would happily tell all involved to take a hike if that were the cost of "saving" the NHS, or even just saving money.

Doctors in the UK have been cucked, to put it mildly, haha. They've only recently grown a backbone, now that the wage decreases have become unbearable.

The UK government(s) have historically relied on immigrant doctors to prop up the NHS whenever the locals started getting fed up with it. I can't complain about this too much, given that I intend to emigrate soon, but it is certainly partly responsible for their depressed wages.

A government willing to sideline its populace with immigrants will happily do so with AI as and when feasible, and they've already stated that that's their intent.

I could live with postponing the singularity a few years till we get it right, but that's seemingly not on the table.

(I mildly disagree that most people won't avail of transhuman upgrades. Eventually they'll end up normalized, in much the same way nobody really makes a fuss about glasses, hearing aids or pacemakers.)

That changes greatly when human doctors become outright obsolete

This is where we disagree - I don't see human doctors becoming obsolete anytime soon. Perhaps from a medical perspective, sure, but for the majority of laypeople I'd imagine a large part of a doctor's job is comforting the person they're treating.

Now I do think that, as with almost all knowledge work, doctors will be able to become more productive, especially those who don't see patients most of the day. But my understanding is that the vast majority of, say, a primary care physician's job is to go from one 30-minute patient visit to the next, hearing what people have to say and writing it down, then telling them they're going to be okay and that the doctor can help.

Even if we can prove that LLMs give better medical advice than doctors 100% of the time, I don't think the majority of people would be comfortable hearing it from a non-doctor for quite a while.

I could live with postponing the singularity a few years till we get it right, but that's seemingly not on the table.

You don't think accelerating progress now could be the best way to reach alignment?

I mildly disagree that most people won't avail of transhuman upgrades. Eventually they'll end up normalized, in much the same way nobody really makes a fuss about glasses, hearing aids or pacemakers.

Depends on the speed of the takeoff, I suppose.

Yes, reassurance and a good bedside manner are important aspects of a doctor's role! That being said, I can see AI doing all of that too:

  1. Humans will anthropomorphize anything, so a cutesy robot face on a monitor, or even a deepfaked human one, might work. Proof of concept: telemedicine.

  2. Otherwise unskilled individuals simply conveying the information provided by an AI, such as a deskilled doctor or nurse who is only there as a pretty face. Still utterly catastrophic for the profession.

  3. People get used to anything. Eventually, when the public cottons on to the fact that AI doctors are faster, cheaper and better than humans, they'll swallow their discomfort and go with it.

Hmm, deepfakes for telemedicine would be concerning. I get your point with #2 as well, although I think that'll take some time to roll out.

I see what you mean. I suppose the medical profession might be on the way out. I was supposed to be the optimistic one! Alas.