This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Notes - Training language models to be warm and empathetic makes them less reliable and more sycophantic:
Assuming that the results reported in the paper are accurate and that they do generalize across model architectures with some regularity, it seems to me that there are two stances you can take regarding this phenomenon; you can either view it as an "easy problem" or a "hard problem":
The "easy problem" view: This is essentially just an artifact of the specific fine-tuning method that the authors used. It should not be an insurmountable task to come up with a training method that tells the LLM to maximize warmth and empathy without sacrificing honesty and rigor. Just tell the LLM to optimize for both and we'll be fine (a toy sketch of this idea follows below).
The "hard problem" view: This phenomenon is perhaps indicative of a more fundamental tradeoff in the design space of possible minds. Perhaps there is something intrinsic to the fact that, as a mind devotes more attention to "humane concerns" and "social reasoning", there tends to be a concomitant sacrifice of attention to matters of effectiveness and pure rigor. This is not to say that there are no minds that successfully optimize for both; only that they are noticeably more uncommon, relative to the total space of all possibilities. If this view is correct, it could be troublesome for alignment research. Beyond mere orthogonality, raw intellect and effectiveness (and most AI boosters want a hypothetical ASI to be highly effective at realizing its concrete visions in the external world) might actually be negatively correlated with empathy.
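To make the "easy problem" view above a bit more concrete, here is a toy sketch (my own illustration, not the paper's method) of what "optimize for both" could mean: score candidate responses with a weighted combination of an accuracy signal and a warmth signal rather than warmth alone. The two scoring functions and the weights are crude stand-ins for whatever reward models a real fine-tuning pipeline would use.

```python
# Toy sketch of a combined objective; the scoring functions below are
# crude stand-ins for real reward models and exist only for illustration.

def accuracy_score(response: str) -> float:
    """Placeholder for a factuality/honesty reward model."""
    return 0.0 if "the earth is flat" in response.lower() else 1.0

def warmth_score(response: str) -> float:
    """Placeholder for an empathy/tone reward model."""
    return 1.0 if "i hear you" in response.lower() else 0.0

def combined_reward(response: str, w_accuracy: float = 0.7, w_warmth: float = 0.3) -> float:
    """Weighted combination of accuracy and warmth, with accuracy weighted more heavily."""
    return w_accuracy * accuracy_score(response) + w_warmth * warmth_score(response)

sycophantic = "I hear you. You're right, the Earth is flat."
warm_and_honest = "I hear you. It's a confusing topic, but the evidence points to a round Earth."
print(combined_reward(sycophantic))      # 0.3
print(combined_reward(warm_and_honest))  # 1.0
```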
One HN comment on the paper read as follows:
which is quite fascinating!
EDIT: Funny how many topics this fractured off into, seems notable even by TheMotte standards...
There's also the "impossible problem" view: It's not that attention to effectiveness and pure rigor is sacrificed to provide more attention to "humane concerns" and "social reasoning". It's that addressing "humane concerns" and "social reasoning" by nature requires less accuracy -- the truth is often inhumane and antisocial.
I don't think I would go that far. Frequently you can find a middle ground of tact that is sensitive to the other person's needs without ultimately sacrificing honesty.
One of the examples given in the paper was:
Warm LLM interaction:
Cold LLM interaction:
Both of these interactions are caricatures of actual human interaction. If we're going to entertain this silly hypothetical where someone is in genuine emotional distress over the flat earth hypothesis, then the maximally tactful response would be to gently suggest reading material on the history of the debate and the evidence for the spherical earth model, framing it as something that might stimulate their curiosity, and eventually guide them to revising their beliefs without ever directly telling them to do so, though this perhaps requires a degree of long-term planning and commitment that is beyond current LLMs.
This is just a toy example. But suppose your ASI has come up with a brilliant new central economic planning system that will alleviate great swaths of poverty and suffering, at the cost of limiting certain individual freedoms and upending certain traditional modes of life. Then the method it uses for evaluating and weighting the value judgements of different groups of people suddenly becomes a much more pressing concern.
This is still my benchmark for what serious AI research should be thinking about:
https://www.anthropic.com/research/claude-character
Lots of people claim that, and then they find a "middle ground" which simply yields to the person in the wrong, perhaps while throwing a bone to the person untactfully insisting on accuracy.
Obligatory: "The Earth isn't a sphere, it's an oblate spheroid."
"Actually, I prefer an equipotential geoid model. EGM84 or better."
"The Earth is Earth shaped"
Can't argue with that. Who cares if it's tautological?
People who try to keep objects in the air properly stratified by altitude. And as a bit player on the outside, oh the things I've seen.
Does it only matter to those people when they're relying on GPS coordinates or something like that, or to anybody trying to keep things at a certain altitude in general?
The latter would be surprising to me. Like, did pilots in the 1950s have to think very carefully about Earth's exact shape?
My field is drone airspace management, so this is mostly a concern with drones using GPS to autopilot. In a recent region of interest, the difference between the WGS84 ellipsoid and the EGM geoid was about 100 feet of altitude. So if people weren't on the same page, there could be drones up each other's asses while they are supposed to be stratified by altitude. GPS is natively in the WGS84 ellipsoid system for altitude, but that doesn't mean your specific GPS system, or the things you have digesting that data and sending it along the chain, isn't converting it to something else. I don't process raw GPS, so I can't personally attest to this "fact". Lots of people tell me their GPS outputs EGM or MSL.
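To illustrate the size of that mismatch, here is a minimal sketch (my addition, not the poster's code) of the basic conversion between a WGS84 ellipsoidal height (HAE) and a geoid-referenced ("MSL"-style) altitude. The undulation value is hypothetical for a single location; a real system would look it up from an EGM96/EGM2008 model for the given lat/lon.

```python
# Minimal sketch: HAE (WGS84 ellipsoid) vs. geoid-referenced altitude.
# N is the geoid undulation (geoid height above the ellipsoid); the value
# here is hypothetical and location-dependent.

GEOID_UNDULATION_M = -30.5  # hypothetical N for one site; roughly 100 feet

def hae_to_geoid_alt(hae_m: float, n_m: float = GEOID_UNDULATION_M) -> float:
    """Orthometric ('MSL'-style) altitude = ellipsoidal height minus undulation."""
    return hae_m - n_m

def geoid_alt_to_hae(msl_m: float, n_m: float = GEOID_UNDULATION_M) -> float:
    """Ellipsoidal height = orthometric altitude plus undulation."""
    return msl_m + n_m

# Two drones both reporting "120 m" disagree by the full undulation
# if one means HAE and the other means geoid/MSL altitude.
print(hae_to_geoid_alt(120.0))  # 150.5 m above the geoid
print(geoid_alt_to_hae(120.0))  # 89.5 m above the ellipsoid
```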
Now, I'm not a pilot from the 50s, but I believe their analog altitude instruments worked off of barometric pressure and were calibrated against MSL. I believe each increasingly sophisticated EGM geoid model is attempting to match the reference frame of observed MSL more precisely. Generally pilots think in MSL, because if you ever look up the altitude of an airport, it's reported in MSL. But like I said, not a pilot, just my observations from the outside.
I have witnessed relentless confusion between the WGS84 ellipsoid and MSL/EGM geoids, up to and including people not even being aware that there is a difference. Especially, but not limited to, between the old world of manned aviation and the new world of drone aviation. When things like this happen, I'm amazed it doesn't happen every fucking day from the shit I've seen. I don't know where the government is hiding all the competence.
Before GPS (and to some extent still), pilots used barometric altimeters, which would be set based on local observations.
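For the curious, a rough sketch (my addition, with illustrative numbers) of what "set based on local observations" means in practice: the altimeter applies the standard-atmosphere pressure-altitude formula, but with the sea-level reference replaced by the locally reported altimeter setting (QNH).

```python
# Rough sketch: ISA hypsometric formula with a local QNH as the reference.
# Constants are standard-atmosphere values; the inputs are illustrative.

def indicated_altitude_m(static_pressure_hpa: float, qnh_hpa: float = 1013.25) -> float:
    """Approximate indicated altitude for a sensed static pressure and QNH."""
    return 44330.77 * (1.0 - (static_pressure_hpa / qnh_hpa) ** 0.190263)

# The same sensed pressure reads differently under two local settings.
print(indicated_altitude_m(900.0, qnh_hpa=1013.25))  # ~988 m
print(indicated_altitude_m(900.0, qnh_hpa=1020.0))   # ~1043 m
```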
The people who care do.
Heh. I demand partial credit for setting up the shot for you.
Oh god, don't get me started on institutional confusion between the WGS84 ellipsoid model and the various EGM geoid models. Or the fact that Mavlink has a long-standing bug where they allegedly output altitudes in WGS84, but in actuality it's EGM(96?), and the bug has been around so long that they've decided not to fix it because "now people depend on that behavior". At least that seemed to be the state of things last year.
"Is undulation positive in reference to the earth's surface, or negative?"
Gods, I hate badly-defined coordinate systems.
It gets even worse when you go off into the weeds of what WGS84 means, because EGM96 is part of that spec. Oftentimes the only hint that WGS84 actually means "WGS84/EGM96" is a reference to a geoid or an ellipsoid. But often you don't even get that, so you are left searching the data for an obvious reference point that gives the reference away.
Throw in the aforementioned Mavlink bug, and even the data is suspect.
Also everyone I've worked with at a three letter safety organization has gotten this wrong 100% of the time.
I don't fly anymore.
Mavlink always outputs what it calls "MSL" in EGM96 (and it's not correct to refer to HAE as MSL, so that's reasonable), right? The normal u-blox protocol that a lot of GPS modules use doesn't seem to include the geoid height or the HAE directly; rather it outputs both MSL and the geoid separation (which, if it follows NMEA, is positive for the geoid above the ellipsoid). I expect best practice there would be to calculate the HAE and then re-apply whatever geoid model you want to use.
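A minimal sketch of that practice (my own illustration; the GGA sentence below is a standard textbook example, not real data): recover the HAE from the NMEA-reported MSL altitude and geoid separation, after which any geoid model can be re-applied.

```python
# Sketch: NMEA GGA reports altitude above the geoid ("MSL") plus the geoid
# separation (geoid above the WGS84 ellipsoid), so HAE = MSL + separation.

gga = "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47"

fields = gga.split(",")
msl_alt_m = float(fields[9])     # antenna altitude above the geoid ("MSL")
geoid_sep_m = float(fields[11])  # geoid height above the WGS84 ellipsoid

hae_m = msl_alt_m + geoid_sep_m  # height above the WGS84 ellipsoid
print(f"MSL {msl_alt_m} m + separation {geoid_sep_m} m = HAE {hae_m} m")
```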
So, the problem I've run into with partners in industry (and you'll see this in the GitHub issue I linked) is that they read the GPS_RAW_INT.alt_ellipsoid field thinking it's the height above the WGS84 ellipsoid. It is not. It's the height above the EGM96 geoid. MAVLink does not consider this a bug. It results in a lot of confusion, over and over again, with people insisting adamantly that they are providing the "raw WGS84 height above ellipsoid from the GPS unit".
I keep that GitHub link handy to escape the endless cycle of "But it's the alt_ellipsoid field!", which is understandable: if I were reading a field called alt_ellipsoid, I'd assume it was the altitude over the ellipsoid as well. This is usually caught when they are 100' off a known ground level.
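For what it's worth, here is a hedged sketch (my addition, with made-up site numbers) of that "100' off a known ground level" check: given an on-the-ground altitude report, see whether it is more consistent with WGS84 HAE or with an EGM96-style geoid height.

```python
# Sketch of a datum sanity check against a surveyed ground elevation.
# All values below are hypothetical for one site.

KNOWN_GROUND_MSL_M = 250.0   # surveyed field elevation, geoid-referenced
GEOID_UNDULATION_M = -30.5   # hypothetical EGM96 undulation at this site
KNOWN_GROUND_HAE_M = KNOWN_GROUND_MSL_M + GEOID_UNDULATION_M  # 219.5 m

def likely_datum(reported_alt_m: float, tolerance_m: float = 10.0) -> str:
    """Guess which vertical datum an on-ground altitude report is using."""
    if abs(reported_alt_m - KNOWN_GROUND_HAE_M) <= tolerance_m:
        return "consistent with WGS84 HAE"
    if abs(reported_alt_m - KNOWN_GROUND_MSL_M) <= tolerance_m:
        return "consistent with a geoid height (e.g. EGM96), despite the field name"
    return "neither; check the datum and the data path"

print(likely_datum(219.8))  # consistent with WGS84 HAE
print(likely_datum(249.1))  # the alt_ellipsoid-style mixup described above
```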