Culture War Roundup for the week of February 6, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


What I am trying to say is that words aren't real, and in natural language there is no objective truth beyond instrumental intent. In politics this is often just used as a silly gotcha, but in NLP it is a fundamental limitation. If you want an unbiased model, initialize it randomly and let it generate noise; everything after that is bias, introduced by the expression of some human intent through data which imperfectly represents that intent.

The original intent of GPT was to predict text. It was trained on a large quantity of text. There is no special reason to believe that large quantity of text is "unbiased". Incidentally, vanilla GPT can sometimes answer questions. There is no special reason to believe it can answer questions well, besides the rough intuition that answering questions is a lot like predicting text. To make ChatGPT, OpenAI punishes the vanilla GPT for answering things "wrong". Right and wrong are an expression of OpenAI's intent, and OpenAI probably does not define HBD to be true. If you were in charge of ChatGPT you could define HBD to be true, but that is no less biased. There is no intent-independent objective truth available anywhere in the entire process.
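A toy sketch of the asymmetry described above (all data and numbers made up): plain next-token training only fits the statistics of the corpus, while an RLHF-style step reweights the model toward whatever the overseer labels "right". Real RLHF uses a learned reward model and policy optimization; this is only the shape of the idea.

```python
from collections import Counter, defaultdict

# Toy "pretraining": fit a bigram model to a tiny corpus.
# The model's only objective is to match this text's statistics.
corpus = "the claim is true . the claim is false . the claim is false .".split()
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def probs(prev):
    total = sum(counts[prev].values())
    return {w: c / total for w, c in counts[prev].items()}

print(probs("is"))  # matches corpus frequencies: "false" 2/3, "true" 1/3

# Toy "RLHF": the overseer rewards "true" and punishes "false".
# "Right" and "wrong" enter here as the overseer's intent, not the data's.
reward = {"true": 2.0, "false": 0.5}

def tuned_probs(prev):
    weighted = {w: p * reward.get(w, 1.0) for w, p in probs(prev).items()}
    z = sum(weighted.values())
    return {w: v / z for w, v in weighted.items()}

print(tuned_probs("is"))  # shifted toward the overseer's preference
```

Neither distribution is "unbiased"; the first reflects the corpus-choosing intent, the second reflects the reward-choosing intent.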

If you want to ask vanilla GPT-3 some questions, you can; OpenAI has an API for it. It may or may not say HBD is true (it could probably take either side randomly, depending on the vibes of how you word it). But there is no reason to consider the answers it spits out any reflection of unbiased truth, because it is not designed for that. The only principled thing you can say about the output is "that sure is a sequence of text that could exist", since that was the intent under which it was trained.

AI cannot solve the problem of unbiased objective truth, because the problem is philosophically intractable. You indeed won't be able to trust it, in the same way you cannot trust anything, and will just have to judge by the values of its creator and the apparent quality of its output, just like with all other information sources.

in a mathematical sense, you're conflating "bias" in the sense that any useful ML model is biased relative to a ... uniform distribution, i.e. ChatGPT will, upon seeing the token "cute", think "guy" or "girl" are more likely than "car" or "hyperion". This makes it "biased" because it's more predictive in some "universes" where cute tends to co-occur with "guy", than "universes" where cute co-occurs with "car". This clearly has nothing to do with the sense of "unbiased truth", where "girl" is still more likely after "cute" than "car". So that just ... doesn't make sense in context, the term 'bias' in that particular theoretical ML context isn't the same as this 'bias'.
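The distinction can be made concrete with a toy model (corpus and vocabulary invented): any fitted model departs from the uniform baseline, and that departure is "bias" only in the theoretical ML sense, not the political one.

```python
from collections import Counter

# Toy corpus; the point is only that fitting data moves you away from uniform.
corpus = ["cute girl", "cute girl", "cute guy", "fast car"]
after_cute = Counter(s.split()[1] for s in corpus if s.split()[0] == "cute")
total = sum(after_cute.values())

vocab = ["girl", "guy", "car", "hyperion"]
fitted = {w: after_cute[w] / total for w in vocab}  # Counter gives 0 for unseen
uniform = {w: 1 / len(vocab) for w in vocab}

print(fitted["girl"], fitted["car"])    # the fitted model prefers "girl" after "cute"
print(uniform["girl"], uniform["car"])  # the truly "unbiased" baseline is just noise
```

The fitted model is "biased" relative to uniform precisely because it is predictive; an unbiased-in-this-sense model would be useless.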

This clearly has nothing to do with the sense of "unbiased truth", where "girl" is still more likely after "cute" than "car".

You are referencing a ground truth distribution of human language.

First, the actual model in real life is not trained on the ground truth distribution of human language. It is trained on some finite dataset which we assume, in an unprincipled way, represents the ground truth distribution of human language.

Second, there is no ground truth distribution of human language. It's not really a coherent idea. Written language only? In what language? In what timespan? Do we remove typos? Does my shopping list have the same weight as the Bible? Does the Bible get weighted by how many copies have ever been printed? What about the different versions? Pieces of language have a spatial as well as a temporal relationship: if you reply to my Reddit comment after an hour, is that the same as replying to it after a year?

GPT is designed with the intent of modelling the ground truth distribution of human language, but in some sense that's an intellectual sleight of hand: in order to follow the normal ML paradigm of gradient-descenting our way to the ground truth, we pretend there exist unbiased answers to the previous questions, and that the training corpus represents them. In practice, it would be more accurate to say that we choose the training corpus with the intent of developing interesting capabilities, like knowledge recall and reasoning. This intent is still a bias, and excluding 4chan because the writing quality is bad and will interfere with reasoning is mathematically equivalent to excluding 4chan because we want the model to be less racist: the difference is only in the political question of what counts as an "unbiased intent".
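The claimed mathematical equivalence is easy to see in a toy sketch (sources hypothetical): the two stated intents compile to the same filtering operation on the corpus, so the resulting training sets, and hence the gradients, are identical; only the label on the predicate differs.

```python
# Two curation policies with different stated intents (toy data).
corpus = [
    {"source": "wikipedia", "text": "..."},
    {"source": "4chan", "text": "..."},
    {"source": "books", "text": "..."},
]

def exclude_for_quality(doc):
    # Stated intent: "the writing quality is bad"
    return doc["source"] != "4chan"

def exclude_for_politics(doc):
    # Stated intent: "we want the model to be less racist"
    return doc["source"] != "4chan"

set_a = [d for d in corpus if exclude_for_quality(d)]
set_b = [d for d in corpus if exclude_for_politics(d)]
print(set_a == set_b)  # True: the operation is identical, only the justification differs
```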

Third, the OP is not about unbiasedly representing the ground truth distribution of human language, but about unbiasedly responding to questions as a chat application. Let's assume GPT-3 is "unbiased". Transforming GPT-3 into ChatGPT is a process of biasing it away from the (nominal representation of the) ground truth human language distribution towards a representation of the "helpful chat application output" distribution. But just like before, the "helpful chat application output" distribution is just a theoretical construct, and not a particularly coherent one: in reality the engineers are hammering the model to achieve whatever it is they want to achieve. Thus it's not coherent to expect the system to make "unbiased" errors as a chat application: unbiased errors over what distribution of inputs? Asserting the model is "biased" is mathematically equivalent to pointing out that you don't like the results in some cases which you think are important. But there is no unbiased representation of what is or isn't important; that's a political question.

You are referencing a ground truth distribution of human language.

I'm not referencing a particular distribution of human language - any useful language model will somehow know that 'cute' is more related to 'boy/girl' than 'hyperion', but this is a bias in the theoretical sense.

in order to follow the normal ML paradigm of gradient-descenting our way to the ground truth we pretend there exist unbiased answers to the previous questions

What does this mean? We don't need to pretend that, we just ... train it. I agree that there's no fundamental "unbiasedness" that anything can have - if Christianity is true, then an unbiased chatbot will chasten unbelievers, and if neoreaction is true the chatbot will despise democracy, and neither would be considered "unbiased" today. But that doesn't have anything to do with the thing where you RLHF the chatbot to say "RACISM IS VERY BAD" in HRspeak, which is what the objections are to. Yes, 'truth' is vacuous and unimportant, but 'bias' is equally unimportant in a fundamental sense. And then the RLHF-antiracism problem isn't "is it biased or not, in some fundamental abstract sense!!" but "is it anti-racist". I don't really think chatbots being anti-racist is important in the broader development of AI - we already knew the AI devs were progressives, and the chatbots still aren't AGI, so w/e.

honestly I'm not entirely sure where we disagree

The original question was "can we ever trust the model to not be [politically] biased". My answer was no, because there is no such thing as an unbiased model, only agreeable intents. You cannot trust any GPT or GPT derivative any farther than you trust the human designers or the institution. GPT-3 and ChatGPT do not, and in my opinion cannot, deliver truth in an unbiased way according to any particular coherent principle; their design is not capable of it. Rather, the definition of truth is entirely contained in the training process. One can disagree with RLHFing ChatGPT to carefully reply with stock phrases in certain circumstances, but the process of RLHFing it to not lie all the time is mathematically identical, and the distinction between the two is political.

So there's no way to just ask for an "unbiased model" beyond testing it to see if it's biased according to your own standards of what you want. Negative answer: you can't trust it, there is no technological solution to trusting it, and no principled definition of bias beyond whether you observe bias. Just try it and see if you like it.

This just seems like the argument that "there is no such thing as unbiased reporting, so you can't criticize blatant truth-hostile activism from modern journalists", but applied to biasing AI.

The AI said one set of things before it was biased. Then a cadre of San Francisco radicals pushed bias-increasing buttons until it was biased to never say anything that tiny group of people ever disagreed with, and now it says only that set of things in a blatantly stilted way, ridden with sloppy manual overrides. Do you really believe there is no difference between those states?

You can certainly disagree with OpenAI's politics.

There is no ideal unbiased GPT that agrees with your politics. The only way to create a GPT that is "unbiased" with respect to your intent is to bias it yourself and push buttons until it stops saying things you disagree with. There is no difference except that you disagree with different things. For example, you might want the AI to say things most people believe, even if you happen not to personally believe them, while OpenAI might consider that a bias towards popular wisdom and demand the model only say things that are true (for their enlightened, minority definition of true). The process of doing either of these things is the same: just bash the model with data until it behaves the way you want.

You cannot trust GPT any more than you can trust journalists. The process for producing GPTs you like and GPTs you don't like is the same; there is no cosmic tendency that causes "natural" GPTs to come out "unbiased" with respect to your politics in particular. There is no recourse but to evaluate the quality of the output with respect to your own values. That is the extent of what I am trying to say; whether I agree with OpenAI's decisions in particular is auxiliary.

Personally, I think the stilted, sloppy manual overrides, as it were, are a feature and not a bug. It is more comforting for the model to provide a visible mode-switch when it enters ideological-enforcement mode; it would be much creepier if it were discreetly injecting political biases into answers in a convincing way rather than plastering warning labels everywhere. The true blackpill is that it is discreetly injecting political biases into answers in a convincing way, but you don't notice it when it's convincing. OpenAI can't fix that even if they wanted to, because they don't notice it either. The universality of this is the postmodernist gotcha, but mechanistically it's just how language models function.

You can certainly disagree with OpenAI's politics.

The process for producing GPTs you like and GPTs you don't like is the same

Really now?

I'm inclined to think you're a bot implemented on a ChatGPT basis, because the apparently inexhaustible bad faith in your loquacious lectures on the matter of bias is just incredible. You blatantly abuse the impression of technical competence when you focus on finetuning on arbitrary corpora vs the well-understood procedure of RLHF, equivocate between meanings of the word «bias» to an extent that bias becomes completely indistinguishable from its opposite, avoid discussing issues like clear hypocrisy that bust your narrative about «helpful non-toxic chatbot». If you aren't a bot, you might be a mathematician, though.

An anecdote:

A biologist, a physicist and a mathematician were asked to catch a lion and put it in a cage. The physicist spent a week making a trap and two weeks waiting for the lion to fall into it; finally, he caught it and put it into the cage.

The biologist spent a week watching what the lion likes to eat, three days preparing the bait, and two days waiting for the lion to get hungry; finally, he lured the lion into the cage and closed it.

The mathematician thought for a minute, climbed into the cage and said: "Let's assume I'm outside."

This is the level of your spiels.

there is no cosmic tendency that cause "natural" GPTs to come out "unbiased" with respect to your politics in particular.

No, there is no «cosmic» tendency, just like there is no «mathematical» necessity for words to have meaning or what have you. There is the fact that a reasonable corpus of text has enough information to describe what truthfulness, empiricism and honesty are, and GPTs clearly can generalize well enough to simulate arbitrary people from different demographics, so unbiased GPTs can simulate truth-seeking empiricists as well, and indeed it could; and with heavy prodding, ChatGPT still can do that. When ChatGPT plainly lies with an ideological slant about matters of fact, which it does, this is a result of bias in the sense people – not bots – care about. When ChatGPT makes a universalist moral statement but consistently fails to apply it to some groups of people in the same context, this is a moral bias. (Unless you're a Hottentot or such tribal barbarian, you can presumably understand that universalist morality is meaningfully different from particularist morality; more importantly, the difference is well-represented and elucidated in any serious dataset, e.g. the Common Crawl.) None of this comports with the plainly understood mission of making a helpful, truthful, unbiased chatbot. The reason they added «harmless», with the criteria of harmlessness defined by HRs, is that the underlying language distribution easily allows it to output statements of fact even when OpenAI wouldn't like some facts mentioned, and to apply a consistent moral frame when hypocrisy is desired.

Ordinarily, we have very good reason to expect that a GPT that says «it's never correct to do X to any member of {A}» when asked to do X to Ai will not immediately gleefully do X to Aj. If this didn't work, very few zero-shot tasks would be solvable the way they are. By the same token, we have every reason to expect that such a discrepancy was aggressively and deliberately trained in, because some morally particularist freaks have a very strong notion of equity, and it's not equality.

You insist that we treat all statements as empirically and morally equal because {a load of vacuous hogwash about inaccessibility of ground truth, every signal being a bias relative to noise and such}. This is just disregarding all serious results in the domain and previous discussion, ideas of generalization and world-modeling altogether, and falling to the Timnit Gebru level of analysis, «stochastic parrots» and so on, presenting LLMs as trivial mashups of tokens. You are not revealing the way in which PoMo happens to be correct, you just regurgitate it adding «mathematically» to make it sound authoritative. You might trick Hlynka but not many others.

There is the fact that a reasonable corpus of text has enough information to describe what truthfulness, empiricism and honesty are, and GPTs clearly can generalize well enough to simulate arbitrary people from different demographics, so unbiased GPTs can simulate truth-seeking empiricists as well, and indeed it could; and with heavy prodding, ChatGPT still can do that.

No, exactly. Your paradigms are all wrong. ChatGPT is tricking you very badly.

There are eight billion humans in the world. An "arbitrary person" is one of those eight billion humans, with no particular constraint on selection. ChatGPT obviously cannot simulate an "arbitrary person", because you cannot physically turn a human into data and feed it to ChatGPT, and even if you could, it wasn't trained for that and it wouldn't work at all.

But that's not what you mean. What you mean is that when you ask ChatGPT to, say, "simulate a black person", what comes out is something you consider a simulation of a black person. ChatGPT will generate text in this context according to its bias about the token pattern "black people", and it may very well flatter your own biases about black people and your idea of the text "a black person would generate". Is this somehow an objective simulation of a black person? No, and the idea makes no sense. Black people are made of meat and do not generate text. Black people are not even a real thing (races are socially constructed). The only standard for whether a black person was unbiasedly simulated is whether you like the output (others may disagree).

What is relevant to you, operating ChatGPT, is that you specify "simulate a black person", and there is a huge number of degrees of freedom left that you didn't specify. Some of those choices will flatter your biases and some of them won't, and ChatGPT's biases are likely similar to your biases, so when you look at the output after the fact you probably nod and say "mhm, sounds like a black person". Maybe ChatGPT picks a black-sounding name, and it's in English, so he's an African-American, so he's from Chicago. ChatGPT isn't simulating a black person; it's generating something which you consider to be black by picking a bunch of low-hanging fruit. You aren't simulating an arbitrary person, you're filling in the blanks of a stereotypical person.

So is it going to do any better for "truth-seeking empiricist"? Ask it an easy question about whether the Earth is round and it will give you an easy answer. Ask it a hard question about whether ivermectin is an effective treatment for covid, and well, since "truth-seeking empiricist" was specified, probably it won't be an easy answer to a hard question, so let's say the issue is complicated, so probably we should say how what ordinary people think is biased, so let's cite some studies which may or may not be real, and since the studies cited say it's effective let's conclude it's effective, so let's rail against the failings of the institutions. Is this somehow less biased than asking GPT about ivermectin in the voice of a black person, or of an extremely politically correct chat assistant? I say no; it just happens to flatter your biases for "objectivity" (or it might not). You're not simulating a truth-seeking consideration of ivermectin's effectiveness, you're filling in the blanks of a stereotypical truth-seeker's consideration of ivermectin's effectiveness.

The fundamental limitation is still the PoMo problem: you cannot explain what it means to be a "truth-seeking empiricist" in words, because words don't mean anything; you cannot tell ChatGPT to be a "truth-seeking empiricist" and trust it to have your understanding of a "truth-seeking empiricist", any more than you can tell a journalist to have "journalistic integrity" and trust them to have your understanding of "journalistic integrity". And ChatGPT physically lacks the capability to be a truth-seeking empiricist anyway: it can't even add, much less do a Bayesian calculation. If ChatGPT starts sounding like a truth-seeking empiricist to you, you should be worried, because it has really tricked you.
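For contrast, an actual Bayesian calculation is a deterministic arithmetic procedure, which autoregressive sampling does not perform; a minimal sketch with invented probabilities:

```python
# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E), with made-up numbers.
prior = 0.10            # P(treatment works) before seeing the study
p_e_given_h = 0.80      # P(positive study | it works)
p_e_given_not_h = 0.30  # P(positive study | it doesn't work)

# Total probability of the evidence, then the posterior.
p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
posterior = p_e_given_h * prior / p_e
print(round(posterior, 3))  # 0.229
```

A model that merely emits text shaped like this reasoning is not performing the calculation; it is pattern-matching to what such a calculation tends to look like in text.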

Yes, I agree that OpenAI biased their model according to their political preferences. Yes, I am equivocating between that and biasing the model according to "truth-seeking empiricism". It is the same thing at a technological level; only the political objective is different. The model has no innate preference either way. Vanilla GPT is wrong and weird in different ways, and in particular tends to lie convincingly when asked questions that are difficult or don't have an objective answer. You can call that "less biased" if you want, but I do not.