This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Dr. GPT will see you now
I've often noted around these parts that GPT-4 is a really good clinician. It would be hard for it not to be one; I'd be bloody amazing too if I had quasi-eidetic memory of most of the relevant diagnostic criteria and guidelines. That being said, the guidelines alone are next to useless; you need the crystallized intelligence to apply them in a specific context.
I'd go so far as to say that it's a better, more competent doctor than the majority of human ones, even if it isn't outright superhuman (I've seen some ridiculously good human ones). I read with some humor an article about how it was tested by some bigshot hospital in the US (Harvard Med?) as a clinical adjunct, but by the end of it, the Clinical Director was praising its bedside manner, in the sense that it was kind, compassionate, patient and understanding, and outright suggested human doctors try and emulate that(!).
However, I have one resounding advantage over it: thanks to decades of regulatory bullshit, I have a certificate saying I can legally prescribe meds, and it can't. L Bozo 💅, as the kids these days say, haha.
Still, it would be remiss of me not to make use of it while that's still relevant, so of late I've been using it to hone my clinical skills: I have it write clinical vignettes that I solve, and then it grades my performance.
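Something like the following captures the loop. This is a purely illustrative sketch, not my actual setup: the prompts, the temperature, and the pre-1.0 `openai` ChatCompletion interface are all assumptions.

```python
# Hypothetical sketch of the vignette -> answer -> grading loop.
# Assumes the pre-1.0 `openai` Python package; prompts are illustrative only.
import openai

openai.api_key = "sk-..."  # your own key

def ask(messages):
    # Single chat completion call; low temperature for more consistent grading.
    resp = openai.ChatCompletion.create(model="gpt-4", messages=messages, temperature=0.3)
    return resp["choices"][0]["message"]["content"]

# 1. Have the model write a vignette without revealing the answer.
vignette = ask([
    {"role": "system", "content": "You are a medical examiner writing realistic clinical vignettes."},
    {"role": "user", "content": "Write one clinical vignette with history, examination and basic labs. "
                                "Do not reveal the diagnosis or management."},
])
print(vignette)

# 2. Type in your own assessment and plan.
my_answer = input("Your assessment and management plan: ")

# 3. Have the model grade the attempt against the vignette it wrote.
feedback = ask([
    {"role": "system", "content": "You are a strict but fair clinical examiner."},
    {"role": "user", "content": f"Vignette:\n{vignette}\n\nCandidate answer:\n{my_answer}\n\n"
                                "Grade the answer, list anything missed, and give a model answer."},
])
print(feedback)
```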
So far, it's done an excellent job, and funnily enough, I feel a tingling of pride when it says I did a good job. Why wouldn't I? It's a smarter and more decent entity than most humans I've had the displeasure of meeting.
Here's an example:
My response:
Assessment:
Honestly, as a pedagogical tool for medical education, it's unparalleled. I paid a great deal less attention in med school than I should have, from the month of anatomy classes I can't remember because I was getting frisky under the bench, to the period where I was severely depressed and barely dragging myself to class. I used to be very insecure about my knowledge base, but I did pull myself together and single-handedly teach myself most of the things I'd glossed over while preparing for the PLAB.
I've tried a similar method on GPT-3.5, and it was inadequate to the task. It made a lot more errors, and ended up confused often enough to be annoying. To date, I haven't seen 4 fumble the bag once. I'd put their respective competencies at about the level of a decent final-year student versus a competent postgraduate resident.
Note - GPT-4 is handicapped when it comes to 'exam recall' style questions. It still performs admirably, but two changes would immediately make it perform better, with no extra "innovation" required.
1. Using medical reports in the pre-training dataset.
Presently, most medical reports are privately held and not available on the open internet. Unlike law, where a majority of cases have at least semi-redacted public documents, medical records are not available to GPT-4 for training. GPT-4 went from the 40th percentile to the 88th percentile on the LSAT simply by adding legal data to the pre-training set. This would be equally easy to do with medicine if HIPAA didn't exist.
2. Retrieval + generate instead of next-word prediction
GPT models have no sense of truth. This means that they will confidently blabber about anything you throw at them, even if it is complete lies. Second, even if the model has read everything on the internet, that does not mean it can easily recall which bits it should be drawing on. Retrieval-style workflows let the model first search the internet for the right answer or the correct reference document: it finds the page with the answer on it, then uses the 1600-SAT-verbal reading comprehension that GPT-4 is famous for to interpret it.
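A rough sketch of that retrieve-then-read pattern is below. It is purely illustrative: `search_snippets` is a stand-in for whatever search or retrieval backend gets plugged in, and the pre-1.0 `openai` ChatCompletion interface is an assumption.

```python
# Hypothetical retrieve-then-generate sketch: fetch reference text first,
# then have the model answer only from that text instead of from memory.
import openai

openai.api_key = "sk-..."

def search_snippets(query: str) -> list[str]:
    # Stand-in for a real retrieval backend (web search, a guideline database,
    # a vector store over reference texts, etc.). Returns a few relevant passages.
    raise NotImplementedError("plug in your own search/retrieval here")

def answer_with_retrieval(question: str) -> str:
    passages = search_snippets(question)
    context = "\n\n".join(passages)
    resp = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "Answer strictly from the provided references. "
                                          "If they do not contain the answer, say so."},
            {"role": "user", "content": f"References:\n{context}\n\nQuestion: {question}"},
        ],
        temperature=0,
    )
    return resp["choices"][0]["message"]["content"]
```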
Both are relatively easy changes (as in, they don't need any major innovation), and should immediately make the lives of every doctor a LOT LOT better. Y'all are doing 80-hour weeks anyway; maybe this will help secure some WLB. Ofc, it won't actually happen, because Doctors are Evil.
Um, the LSAT isn't about legal stuff. It's designed to be taken by students before they enter law school; it's just about logical reasoning and interpreting arguments.
Sorry, I meant the BAR exam