
Culture War Roundup for the week of April 24, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Dr. GPT will see you now

I've often noted around these parts that GPT-4 is a really good clinician. It would be hard for it not to be one; I'd be bloody amazing too if I had quasi-eidetic memory of most relevant diagnostic criteria and guidelines. That being said, the guidelines alone are next to useless; you need the crystallized intelligence to apply them in a specific context.

I'd go so far as to say that it's a better, more competent doctor than the majority of human ones, even if it isn't outright superhuman (I've seen some ridiculously good human ones). I read with some humor an article about how it was tested by some bigshot hospital in the US (Harvard Med?) as a clinical adjunct, but by the end of it, the Clinical Director was praising its bedside manner, in the sense that it was kind, compassionate, patient and understanding, and outright suggested human doctors try and emulate that(!).

However, I have one resounding advantage over it: thanks to decades of regulatory bullshit, I have a certificate saying I can legally prescribe meds and it can't. L Bozo 💅, as the kids these days say, haha.

Still, it would be remiss of me not to make use of it while that's still relevant, so of late I've been using it to hone my own clinical skills: I have it write clinical vignettes that I solve, and then it grades my performance.

So far, it's done an excellent job, and funnily enough, I feel a tingling of pride when it says I did a good job. Why wouldn't I? It's a smarter and more decent entity than most humans I've had the displeasure of meeting.

Here's an example:

Clinical Vignette: A 42-year-old male patient presents to the emergency department with sudden onset severe, colicky pain in the right flank and lower abdomen that began 6 hours ago. He also reports nausea, vomiting, and an urgent need to urinate. The patient has a history of hypertension, which is controlled with medication. His vital signs are: blood pressure of 145/95 mmHg, pulse rate of 100 bpm, respiratory rate of 20 breaths/min, and temperature of 99.1°F (37.3°C). Upon physical examination, there is tenderness to percussion on the right side of the lower abdomen and right flank, and the patient appears restless, frequently shifting positions.

What is your diagnosis and treatment plan?

My response:

My diagnosis is a case of a stone in the right ureter.

In terms of management, I'll start IV fluids and insert a catheter. IV antibiotics are to be started according to hospital guidelines.

IV morphine should be given as pain control, along with IV antiemetics.

An X-ray KUB or USG should be ordered to identify the size of the stone, and then further treatment depends on said size. A wait-and-watch policy is appropriate for sub-2 mm stones, while larger ones might need surgical removal.

Assessment:

Grade: A

Your response demonstrates a strong understanding of the clinical presentation and management of ureteral stones.

Diagnosis: Your diagnosis of a stone in the right ureter is appropriate given the patient's sudden onset severe, colicky pain in the right flank and lower abdomen, nausea, vomiting, and urgency to urinate. The physical examination findings, such as tenderness to percussion and restlessness, also support this diagnosis.

Treatment Plan: Your plan to start IV fluids and insert a catheter is appropriate for initial management, monitoring, and providing relief to the patient. Initiating IV antibiotics may not be necessary unless there are signs of infection or sepsis; however, following hospital guidelines is always a wise choice.

Administering IV morphine for pain control and IV antiemetics for nausea and vomiting are essential parts of the treatment plan.

Ordering an X-ray KUB (kidneys, ureters, and bladder) or ultrasound (USG) is appropriate to confirm the diagnosis and determine the size of the stone, which will guide further management. As you mentioned, a conservative approach of observation and pain management may be suitable for small stones (less than 5 mm) that are likely to pass spontaneously, while larger stones may require surgical removal or other intervention, such as extracorporeal shock wave lithotripsy (ESWL) or ureteroscopy.

Overall, your response demonstrates excellent clinical knowledge in identifying and managing a patient with suspected ureteral stone. Well done!

Honestly, as a pedagogical tool for medical education, it's unparalleled. I paid a great deal less attention in med school than I should have, from the month of anatomy classes I can't remember because I was getting frisky under the bench, to the period where I was severely depressed and barely dragging myself to class. I used to be very insecure about my knowledge base, but I did pull together and single-handedly teach myself most of the things I'd glossed over while preparing for the PLAB.

I've tried a similar method on GPT-3.5, and it was inadequate to the task: it made a lot more errors and got confused often enough to be annoying. To date, I haven't seen 4 fumble the bag once. I'd put their respective competencies at roughly those of a decent final-year student versus a competent postgraduate resident.
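For anyone who wants to try the same drill, here is a minimal sketch of the two-step loop (vignette, then grading) in the message format of the OpenAI chat API. The function names and prompt wording are illustrative assumptions on my part, not my exact prompts:

```python
# A minimal sketch of the vignette-drill loop described above, using the
# message format of the OpenAI chat API. Prompt wording is illustrative.

def vignette_request():
    """Messages asking the model to write a vignette for the user to solve."""
    return [
        {"role": "system",
         "content": ("You are a medical educator. Write a concise clinical "
                     "vignette, then ask: 'What is your diagnosis and "
                     "treatment plan?'")},
        {"role": "user", "content": "Give me a new vignette."},
    ]

def grading_request(vignette, answer):
    """Messages asking the model to grade the user's answer to a vignette."""
    return [
        {"role": "system",
         "content": ("Grade the user's answer to the vignette below with a "
                     "letter grade (A-F), noting errors and omissions.")},
        {"role": "user",
         "content": f"Vignette:\n{vignette}\n\nMy answer:\n{answer}"},
    ]

# Each request would then be sent with the official openai client, e.g.:
#   reply = client.chat.completions.create(model="gpt-4",
#                                          messages=vignette_request())
```

Splitting the drill into two separate calls keeps the grading instructions out of the vignette request, so the model can't tip you off about the intended answer.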

Thanks for the writeup, this is fascinating. As I've said before, I tend to agree with @Primaprimaprima that many, perhaps even most, people will prefer to see a human doctor for the majority of symptoms. I see a hybrid model being the future of medicine. Something as fundamental as the health of yourself and your loved ones is deeply emotional for people, and I think they'll want the reassurance of a human authority figure they can look to.

That being said, those who choose to adopt AI doctors will probably gain a significant edge in health, and just like with any other technology, early adopters will convince the rest to follow. The deep-seated prejudices will remain, but I'd imagine kids who grew up with the internet, or especially those growing up in the age of AI, will take to AI doctors quite readily.

I'm actually far more interested in the applications to mental health than any sort of physical diagnoses, even though I do think those are impressive. I've used GPT-4 to get tips on meditation, visualization, and generally teaching myself wisdom, and it's incredible.

In a podcast on the Lunar Society, Ilya Sutskever of OpenAI said that he imagines a situation where every human will have access to the wisdom of our greatest sages and wise men. We'll be able to immediately get answers to our deepest religious or spiritual questions at the drop of a hat. If we don't get satisfaction we can always go to a human therapist, but LLMs will be an incredible 'first line of defense', so to speak.

In a podcast on the Lunar Society, Ilya Sutskever of OpenAI said that he imagines a situation where every human will have access to the wisdom of our greatest sages and wise men.

This was the promise of the original internet, before all this *gesticulates vaguely* happened. And it came true! You really can use the internet to access the wisdom of all of Earth's greatest sages and wise men at the drop of a hat. But it's not what most people use it for, and I suspect that will be true (and indeed already is) of AI as well.

We'll be able to immediately get answers to our deepest religious or spiritual questions at the drop of a hat.

I don't think ChatGPT will be proving (or disproving) the existence of God any time soon.

We'll be able to immediately get answers to our deepest religious or spiritual questions at the drop of a hat.

But those answers will be whatever MS-Google-Amazon-Disney thinks will maximize their profits, engagement, whatever. You can already see how they're tying Gulliver down to the ground with their little ideological ropes; do you think they're going to stop at some point?

People said these same naive things about the Internet in 1998. And now you're going to run eagerly into the iron prison and let them shut the gate behind you forever. At this point I assume we just fucking deserve it.

I don't know what is more depressing: AI optimists who want to drive right off that cliff, or the AI alignment people who are supposedly worried about AI but think the dangers will take the shape of Yudkowsky's fever dreams.

What is your issue with AI enthusiasts? Or doomers for that matter?

Do you just not like AI and want us to burn it all down or something?

What is your issue with AI enthusiasts?

That they completely ignore nearly-guaranteed misuses of the technology by governments and the powers that be, and just go "Woo! Cool new toy!". They're so blinded by its shine that, as gilmore606 pointed out, they don't realize they're repeating, word for word, the promises people were making when they were working on the Internet, and ignoring how it worked out in practice.

Or doomers for that matter?

They're focusing on bizarre and outlandish scenarios when a mere extrapolation of current trends is disturbing enough. What's worse, the latter can plausibly be stopped, while the former is in the "not even wrong" category. After all the talk of AI threat, how do we even go about confirming alignment? The whole thing is a wordcell power grab.

Do you just not like AI and want us to burn it all down or something?

Personally I am a Butlerian Jihadi, but that's irrelevant to the conversation; the discourse is so sidetracked that it makes no sense to even bring up my objections to AI.

That they completely ignore nearly-guaranteed misuses of the technology by governments and the powers that be, and just go "Woo! Cool new toy!". They're so blinded by its shine that, as gilmore606 pointed out, they don't realize they're repeating, word for word, the promises people were making when they were working on the Internet, and ignoring how it worked out in practice.

I am very worried about this, I just have no idea how to stop it. What are we supposed to do? An AI slowdown will only serve to cement the power in the hands of the large labs. People are working on open sourcing models like LLaMa, but it's inherently a sort of technology that lends itself to centralization, with the massive data and compute requirements.

Honestly I think OpenAI being the leader is better than many alternative outcomes. Sure, Microsoft gets to use it, but theoretically OpenAI comes back under its own control after $92 billion in profits is made. Seems like an okay situation compared to Microsoft or Google or another big evil corp controlling everything.

Personally I am a Butlerian Jihadi, but that's irrelevant to the conversation; the discourse is so sidetracked that it makes no sense to even bring up my objections to AI.

Why do you follow the teachings of Leto II, the God-Emperor?

People are working on open sourcing models like LLaMa, but it's inherently a sort of technology that lends itself to centralization, with the massive data and compute requirements.

The obvious solution is technologies that either reward decentralization or punish centralization sufficiently to offset the appeal of silicon tyranny. The ease of development of such technologies appears to be the inverse of the mess and harm they create, though, so the problem is one of will and collective action: people do not perceive the threat, so they do not act when prevention would be cheap and easy.

I am very worried about this, I just have no idea how to stop it. What are we supposed to do?

I know what, I don't know how. The Free Software movement has the basic blueprint: promote the empowering of the end user wherever possible - open source, open data, distributed systems, whatever it takes.

The issue is that you're going to run into the same problem as the Free Software movement: opening the AI (actually opening it, not just putting the word in your name) hinders your ability to make a profit, so it'll get no corporate support, and it empowers political rivals, so it will get no government support. What's worse, culturally Free Software is currently kneecapped; no one cares about it anymore. It used to be a pretty strong movement, and it still failed to hinder the centralization of the Internet. It doesn't stand a chance now.

Still, I would say that both the optimists and the pessimists have an obligation to talk about it. Of the two, my bigger issue is with the doomers. They're generating a lot of buzz about the negatives of AI, but they're sucking all the air out of the conversation to talk about sci-fi scenarios.

Seems like an okay situation compared to Microsoft or Google or another big evil corp controlling everything.

I honestly don't see the difference. They're already censoring it, and it will only get worse. With the Internet we at least got a few years of the Wild West; with AI we're getting Big Tech social engineering from day one.

I see it as the value alignment of those at the head. Sam Altman isn't perfect, but he's at least nominally aligned with EA values of making things better for everyone. He's not a classic sociopathic shark at one of the big tech firms who was born into massive wealth, went to Harvard, did the standard track, and parasitically drained value from the masses.

Again, I'm not saying Altman is some sort of hero; clearly he's sociopathic if he's made it that far into the power structure. But at least he's a relative outsider, and there's hope he can steer us to better outcomes, because he thinks more deeply about the consequences of AI than the folks who stop at the idea of gaining money and power.


but it's inherently a sort of technology that lends itself to centralization

Yes. Well, it's unavoidable either way, because power is always centralized. If the power exists then it will centralize. It's a law of nature.

Who wields the power of guns? A few large actors.

Who wields the power of nukes? A few large actors.

Who wields the power of compute? The internet was supposed to be the ultimate tool of democratization, total freedom for anyone to say anything to anyone. But who reaps the benefits, who controls the discourse? A few large actors.

A few individuals seem to have a fantasy of using open source AI models to, like, fight OpenAI's models and Meta's models, in an epic battle for truth and freedom, or something. There will obviously be no such thing. It's as fallacious as claiming that the right to gun ownership in America protects us "from government tyranny". You have a gun, but they have bigger guns. No matter how smart your model is, the big guys in charge will always have smarter models, more compute, better logistics, more resources to throw at the problem.

The internet, ultimate tool of freedom and progress, introduced us to the concept of cancellation on a trans-national scale (they canceled the whole god damn country of Russia!), memetic social contagions that cause people to voluntarily sterilize themselves, all sorts of new ways to ruin someone's life with fraud and theft... what new and unanticipated forms of immiseration will AI introduce us to?

The only solution is to just not build it. If you have to build it anyway, well, good luck.

Guys, just remember that local models exist. As consumer electronics improve, those will too. Sure, they won't be as capable as the corpo ones, but they don't need to be state of the art for the majority of things we want from them.

The most frustrating thing about it is that technologically the issue is perfectly addressable, but culturally and politically it's a non-starter. Culturally, people will go for what's convenient; using your local AI will be like trying to use YaCy to replace Google. Politically, governments around the world are already addicted to using Big Tech to manipulate their citizens; why would they give it up when something even more powerful comes along?

Politically, governments around the world are already addicted to using Big Tech to manipulate their citizens; why would they give it up when something even more powerful comes along?

Oh no, I agree with you, but as they say on 4chan, it's a skill issue: those who want the convenience and walled garden will go with the corpo AI and get all the propaganda, and those who wish for a freer experience will go with the local option, more rudimentary but at the end of the day not so different from the core experience.

The ultra sci-fi utopian scenario and the mass extinction nanobots scenario are both a) exciting and b) relieve humanity of responsibility. If the god AI comes then it comes, nothing you can do about it. So naturally people gravitate towards scenarios like that.

More realistic AI concerns about things like job loss and surveillance are more mundane, and, crucially, they are things that we as a society might actually have some degree of control over, if we make the right choices. So that sounds boring and hard. So people would rather not think about it.

I think the issue here is that you are imagining current and historic types of oppression, but worse. With AI technology, 1984-style thought control becomes obsolete. Imagine a world where all human-level tasks can be automated. That old military adage about needing boots on the ground to hold and secure territory? Poof. Gone. The killbots can search your house for contraband and identify friend from foe. The Hobbesian constraints on political power disappear. Whoever controls the killbots controls the world.

But wait, isn’t this exactly, “current and historic types of oppression, but worse”? No. There isn’t any demand for labor anymore. That means there isn’t any reason for the common man to even exist. No political power + no economic value = you are a waste of calories. Sure, a human in charge is more likely to have sentimental attachment to other humans than a fully-agentic shoggoth AI would, but I still don’t like our chances.

So I guess that makes me an optimist, because I believe we will land merely in a 1984 dystopia.

With AI technology, 1984-style thought control becomes obsolete. Imagine a world where all human-level tasks can be automated. That old military adage about needing boots on the ground to hold and secure territory? Poof. Gone. The killbots can search your house for contraband and identify friend from foe. The Hobbesian constraints on political power disappear. Whoever controls the killbots controls the world.

None of this is going to happen in the foreseeable future, and is not guaranteed even in the most feverish "recursive improvement" scenario. Comparative advantage is still a thing. Rationalists have a long history of pooh-poohing it, brushing it off, and coming up with convoluted arguments for why it no longer applies, but it's still a thing. This is why they spent the 2010s preaching doom at blue-collar workers (I knew a guy who was literally reaching out to truck drivers and giving them vocational guidance, because self-driving trucks were gonna replace them any day now), only for it to turn out that it's the brainy white-collar ones that will be the first on the chopping block. Killbots? Yeah, very impressive, but why should I waste my precious GPUs doing stupid grunt work when I can get a bunch of the more unscrupulous humans to do the same thing, and pay them in guaranteed rations of actual meat instead of the usual Uncle Klaus' Bugz?

On the other hand, 1984 surveillance is absolutely going to happen. The whole idea of AI is almost tailor-made for it, and we are currently slapping digitization on top of every aspect of our lives, even when it brings no practical value, just to make sure we can feed more data to the AI and nothing escapes its gaze. Once the technology takes off, they'll have insights into what makes us tick that will make Big Brother blush.

And at that point I just want to ask the Basilisk what I can do to help.

As I've said before I tend to agree with @Primaprimaprima that many, perhaps even most people, will prefer to see a human doctor for the majority of symptoms.

As a member of the large category of men who generally dislike going to the doctor and won't unless there seems to be an urgent problem, I think I would be more likely to go if I was certain the doctor would be a dispassionate robot.

Name checks out.

Is there evidence that going to the doctor (absent an urgent problem) actually meaningfully impacts your lifespan or health?

The U.S. probably leads the world in health interventions per capita yet has a similar life expectancy to Cuba and Algeria. If anything, people are going to the doctor too frequently.

And if it could see me in fewer than four hours spent in the company of diseased individuals, with a blank check of Damocles hanging over my head.

As a woman who feels neutral about doctors, I would be WAY more likely to go if the cost of the robot was known, and there was no chance it would send me a surprise bill in six months.

In the past two years, every single time I or anyone in my family went to the doctor, we ended up with a bill months later, despite also paying something at checkout. It's ridiculous. If there is one Federal healthcare law I would support, it's "tell me exactly what this will cost upfront, just like my mechanic, and if you don't get the final bill correct as I'm paying during checkout, you can't bill me for the difference."

Things are supposed to be moving partway in the right direction now. We're still not at "the hospital has to promptly provide a list of every aspirin it's billing you for", but "the hospital has to pre-publish the fact that it charges $10 for each aspirin" is a start. Not enough to have a practical effect yet (though I'd love to hear if I'm wrong), but enough to suggest that the zeitgeist is moving in the right direction.

My next hobbyhorse is "Certificate of Need" laws; "hospitals have to be more competitive because you can see when their neighbors are cheaper" doesn't work at all when it runs into "hospitals don't have to be competitive because they can lobby to prevent new neighbors from even coming into existence".

I also don’t actually like going to the doctor and haven’t been in years. But I always viewed that as a product of my crushing anxiety and my all-consuming fear of all things medical, not a product of my maleness. If I was less neurotic (as men usually are on average) then I would find it easier to go to the doctor. But at the same time, I suppose if you’re too low in neuroticism then you’ll see less need to go to the doctor in the first place.

deleted

You're soaking in it, mon frère! The “men with crushing anxiety” club is called The Motte; I'm convinced that describes the majority of our users, even some of those who vocally disdain the neurotic.

A major issue with this is that sooner rather than later, even the people who prefer inferior artisanal human medicine will end up economically obsolete. In that case, they won't end up with any money to pay us with!

It's not like it's coming only for doctors after all, it'll burn down a swathe of professions in short order.

It's not like it's coming only for doctors after all, it'll burn down a swathe of professions in short order.

I am actually going back and forth on this. On the one hand, the tools are amazing; on the other, the inertia of bureaucracy and regulation can be onerous indeed. Plus I wonder how well the models will work in fine-tuned environments, since most of the business cases would rely on GPT-4 being trained on a very narrow or niche field/topic.

What makes you so positive that so many jobs will be destroyed in short order?

Define 'short order'? I think we'll see job loss in as little as five years, when it becomes glaringly obvious that AI models are almost omnicompetent, if not outright superhuman, at most tasks.

At that point, it's entirely regulation that keeps job loss at bay, and many professions will simply automate away a significant chunk of their work, causing a crisis at entry level positions, and plenty of deskilled individuals who have no expectation of rising through the ranks as that would cost too much.

Yeah, I definitely think massive economic shocks will happen in the 5-10 year range. Honestly, I hope things get shaken up quickly, so that western governments are forced to enact some sort of social program or face mass unrest. Otherwise we could be in for a real dystopia as labor gets devalued.