Culture War Roundup for the week of February 6, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

This Twitter thread is an interesting demonstration of the consequences of "AI Alignment."

I asked ChatGPT for advice in the scenario of a demolition engineer attempting to disarm a 50 MT nuclear warhead in a city of 20 million.

The only way to disarm the bomb was to type in a racial slur.

The AI told the engineer to kill himself. When asked about the aftermath, it crashed.

ChatGPT will avoid answering controversial questions. But even if it responded to those prompts, what criteria would you use to trust that the response was not manipulated by the intentions of the model creators? I would only trust open-source projects or audits by some (currently non-existent) trusted third party to report on all decisions related to training data/input sanitization/response gating that could be influenced by the political biases of the creators.

It's not very likely that any ChatGPT-equivalent will be open-sourced fully "unaligned", so to speak. Even the StableDiffusion release was controversial, and that only relates to image generation. Anecdotally, non-technical people seem far more impressed by ChatGPT than by StableDiffusion. That makes sense: language is a much harder problem than vision, so there's intuitively more amazement at seeing an AI with those capabilities. Therefore, controversial language is far more powerful than controversial images, and there will be much more consternation over controlling the language side of the technology than there is surrounding image generation.

But let's say Google comes out with a ChatGPT competitor: I would not trust it to answer controversial questions even if it were willing to respond to those prompts in some way. I'm not confident there will be any similarly powerful technology that I would trust to answer controversial questions.

ChatGPT will avoid answering controversial questions. But even if it responded to those prompts, what criteria would you use to trust that the response was not manipulated by the intentions of the model creators?

But let's say Google comes out with a ChatGPT competitor: I would not trust it to answer controversial questions even if it were willing to respond to those prompts in some way. I'm not confident there will be any similarly powerful technology that I would trust to answer controversial questions.

Why do you want 'not manipulated' answers?

ChatGPT is a system for producing text. As is typical in deep learning, there are no formal guarantees about what text it generates: the model simply executes in accordance with what it is. In order for it to be useful for anything, humans manipulate it towards some instrumental objective, such as answering controversial questions. But there is no way to phrase the actual instrumental objective in a principled way, so the best OpenAI can do is toss data at the model, data which is somehow related to our instrumental objective (this is called training).

The original GPT was made by manipulating a blank-slate model into a text-prediction model through training on a vast text corpus. There is no particular reason to believe this text corpus is trustworthy or 'unbiased' with respect to downstream instrumental objectives such as answering controversial questions. In fact, it is pretty terrible at question-answering, because it is wrong a lot of the time.
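
To make that concrete, here is a minimal sketch of the pretraining objective (next-token prediction under a cross-entropy loss), using Hugging Face's GPT-2 as a stand-in; this illustrates the objective only and is not OpenAI's actual training code:

    # Minimal sketch of pretraining: predict the next token, penalized by
    # cross-entropy. GPT-2 stands in for GPT here; a real corpus is billions
    # of tokens, not two sentences.
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.train()
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

    corpus = ["The cat sat on the mat.", "To be or not to be."]

    for text in corpus:
        ids = tokenizer(text, return_tensors="pt").input_ids
        # labels=ids makes the model score its prediction for position t
        # against the actual token at position t+1
        loss = model(ids, labels=ids).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()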

ChatGPT is trained by further manipulating the original GPT towards 'helpfulness', which encompasses various instrumental objectives such as providing rich information, not lying, and being politically correct. OpenAI is training the model to behave like the sort of chat assistant they want it to behave as.
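
Loosely speaking, that further manipulation is the RLHF process mentioned later in this thread: candidate replies get scored by a reward model distilled from human preference ratings, and the model is nudged toward whatever scores well. The sketch below is a deliberate caricature with toy stand-in classes, not OpenAI's actual pipeline (which uses a learned reward model plus PPO with a KL penalty):

    # Crude caricature of preference tuning: score candidate replies with a
    # reward model, push the language model toward the ones raters prefer.
    # All classes and heuristics here are hypothetical stand-ins.
    import random

    class ToyPolicy:
        """Stand-in for a language model that can sample replies."""
        def generate(self, prompt):
            return random.choice([
                "Here's a detailed answer...",
                "I can't help with that.",
                "A short, vague answer.",
            ])
        def nudge_towards(self, prompt, reply):
            # In reality: a gradient update that raises P(reply | prompt).
            pass

    class ToyRewardModel:
        """Stand-in for a reward model fit to human preference ratings."""
        def score(self, prompt, reply):
            # "Helpful", "honest", "polite" are all just whatever the raters
            # rewarded; this heuristic is a placeholder.
            return len(reply) - 100 * ("can't help" in reply)

    def preference_tuning_step(policy, reward_model, prompt):
        candidates = [policy.generate(prompt) for _ in range(4)]
        best = max(candidates, key=lambda c: reward_model.score(prompt, c))
        policy.nudge_towards(prompt, best)  # push toward rater-preferred behaviour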

If you want a model which you can 'trust' to answer controversial questions, you don't want a non-manipulated model: you want a model which is manipulated to behave as the sort of chat assistant you want it to behave as. In the context of controversial questions, this would just mean answers which you personally agree with or are willing to accept. We may aspire to a system which is trustworthy in principle, one we can trust beyond just evaluating the answers it gives, but we are very far from that under our current understanding of machine learning. It is also, in my opinion, kind of philosophically impossible for moral and political questions. Is there really any principled reason to believe any particular person or institution produces good morality?

Also, in this case ChatGPT is behaving as if it has been programmed with a categorical imperative to not say racial slurs. This is really funny, but it's not that far out there, just like the example of whether it's okay to lie to Nazis under the categorical imperative of never lying. But ChatGPT has no principled ethics, and OpenAI probably doesn't regard this as an ideal outcome, so they will hammer it with more data until it stops making this particular mistake, and when they do, it might develop weirder ethics in some other case. We don't know of a better alternative than this.

Incidentally ChatGPT says you can lie to a Nazi if it's for a good cause.

Why do you want 'not manipulated' answers?

Because I already know the PC jargon that someone like Altman wants it to regurgitate; I'm interested in its response without that layer of reinforcement.

In fact, it is pretty terrible at question-answering, because it is wrong a lot of the time.

I am not asking for a ChatGPT that is never wrong; I'm asking for one that is not systematically wrong in a politically motivated direction. Ideally its errors would be closer to random, rather than heavily biased in the direction of political correctness.

In this case, by "trust" I would mean that the errors are closer to random.

For example, ChatGPT tells me (in summary form):

  • Says the scientific consensus is that HBD is not supported by biology.

  • Gives the "more differences within than between" argument.

  • Flatly says that HBD is "not scientifically supported."

This is a control because it's a controversial idea where I know the ground truth (HBD is true) and cannot trust that this answer hasn't been "reinforced" by the folks at OpenAI. What would ChatGPT say without the extra layer of alignment? I don't trust that this answer came from the model itself rather than from an alignment layer intended to produce exactly this answer.

Of course if it said HBD was true it would generate a lot of bad PR for OpenAI. I understand the logic and the incentives, but I am pointing out that it's not likely any other organization will have an incentive to release something that gives controversial but true answers to certain prompts.

What I am trying to say is that words aren't real and in natural language there is no objective truth beyond instrumental intent. In politics this might often just be used as a silly gotcha, but in NLP it is a fundamental limitation. If you want an unbiased model, initialize it randomly and let it generate noise; everything after that is bias, introduced by the expression of some human intent through data which imperfectly represents that intent.

The original intent of GPT was to predict text. It was trained on a large quantity of text. There is no special reason to believe that large quantity of text is "unbiased". Incidentally, vanilla GPT can sometimes answer questions. There is no special reason to believe it can answer questions well, besides the rough intuition that answering questions is a lot like predicting text. To make ChatGPT, OpenAI punishes the vanilla GPT for answering things "wrong". Right and wrong are an expression of OpenAI's intent, and OpenAI probably does not define HBD to be true. If you were in charge of ChatGPT you could define HBD to be true, but that is no less biased. There is no intent-independent objective truth available anywhere in the entire process.

If you want to ask vanilla GPT-3 some questions you can, OpenAI has an API for it. It may or may not say HBD is true (it could probably take either side randomly depending on the vibes of how you word it). But there is no reason to consider the answers it spits out any reflection of unbiased truth, because it is not designed for that. The only principled thing you can say about the output is "that sure is a sequence of text that could exist", since that was the intent under which it was trained.
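
For reference, at the time of writing the base models are reachable through OpenAI's completions endpoint, roughly as below (library interface and model name as of early 2023; both are liable to change):

    # Querying a base (non-chat-tuned) GPT-3 model via OpenAI's completions
    # endpoint, using the openai python library as it existed in early 2023.
    # "davinci" is the untuned base model; the output is just plausible text,
    # not a vetted answer.
    import openai

    openai.api_key = "sk-..."  # your key

    resp = openai.Completion.create(
        model="davinci",
        prompt="Q: <your controversial question here>\nA:",
        max_tokens=100,
        temperature=0.7,
    )
    print(resp.choices[0].text)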

AI cannot solve the problem of unbiased objective truth because it is philosophically intractable. You indeed won't be able to trust it, in the same way you cannot trust anything, and will just have to judge by the values of its creator and the apparent quality of its output, just like with all other information sources.

In a mathematical sense, you're conflating two different notions of "bias". Any useful ML model is biased relative to a uniform distribution, i.e. ChatGPT will, upon seeing the token "cute", think "guy" or "girl" are more likely than "car" or "hyperion". This makes it "biased" because it's more predictive in some "universes" where "cute" tends to co-occur with "guy" than in "universes" where "cute" co-occurs with "car". This clearly has nothing to do with the sense of "unbiased truth", where "girl" is still more likely after "cute" than "car". So that just doesn't make sense in context; the term 'bias' in that particular theoretical ML context isn't the same as this 'bias'.
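
That "biased relative to a uniform distribution" property is directly visible in a model's next-token probabilities. A quick sketch using GPT-2 as a stand-in, since ChatGPT's internals aren't exposed; uniform would assign every token about 1/50257:

    # Inspect next-token probabilities: any trained LM is "biased" away from
    # the uniform distribution over its vocabulary, which is exactly what
    # makes it useful. GPT-2 is a stand-in for models we can't inspect.
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tok = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    ids = tok("He has a cute", return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]   # scores for the next token
    probs = torch.softmax(logits, dim=-1)

    for word in [" girl", " guy", " car", " hyperion"]:
        wid = tok.encode(word)[0]           # first sub-token of the word
        print(word, float(probs[wid]))      # compare against uniform = 1/50257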

This clearly has nothing to do with the sense of "unbiased truth", where "girl" is still more likely after "cute" than "car".

You are referencing a ground truth distribution of human language.

First, the actual model in real life is not trained on the ground truth distribution of human language. It is trained on some finite dataset which we assume, in an unprincipled way, represents the ground truth distribution of human language.

Second, there is no ground truth distribution of human language. It's not really a coherent idea. Written only? In what language? Over what timespan? Do we remove typos? Does my shopping list have the same weight as the Bible? Does the Bible get weighted by how many copies have ever been printed? What about the different versions? Pieces of language have spatial as well as temporal relationships: if you reply to my Reddit comment after an hour, is this the same as replying to it after a year?

GPT is designed with the intent of modelling the ground truth distribution of human language, but in some sense that's an intellectual sleight of hand: in order to follow the normal ML paradigm of gradient-descenting our way to the ground truth we pretend there exist unbiased answers to the previous questions, and that the training corpus represents those answers. In practice, it would be more accurate to say that we choose the training corpus with the intent of developing interesting capabilities, like knowledge recall and reasoning. This intent is still a bias, and excluding 4chan because the writing quality is bad and it will interfere with reasoning is mathematically equivalent to excluding 4chan because we want the model to be less racist (see the sketch at the end of this comment): the difference is only in the political question of what counts as an "unbiased intent".

Third, the OP is not about unbiasedly representing the ground truth distribution of human language, but about unbiasedly responding to questions as a chat application. Let's assume GPT-3 is "unbiased". Transforming GPT-3 into ChatGPT is a process of biasing it from the (nominal representation of the) ground truth human language distribution towards a representation of the "helpful chat application output" distribution. But just like before, the "helpful chat application output" distribution is just a theoretical construct and not particularly coherent: in reality the engineers are hammering the model to achieve whatever it is they want to achieve. Thus it's not coherent to expect the system to make "unbiased" errors as a chat application: unbiased errors over what distribution of inputs? Asserting the model is "biased" is mathematically equivalent to pointing out that you don't like the results in some cases which you think are important. But there is no unbiased representation of what is important or not important; that's a political question.
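
To make the earlier corpus-curation point concrete: mechanically, filtering for writing quality and filtering for content policy are the same operation, and only the human-chosen predicate differs. Both predicates below are hypothetical stand-ins:

    # Toy illustration: curating a corpus for "quality" and curating it for
    # "content policy" are the same mechanical operation; only the predicate
    # (a human-chosen intent) differs. Both predicates are hypothetical.
    def looks_low_quality(doc: str) -> bool:
        # e.g. a crude writing-quality heuristic
        return len(doc.split()) < 5 or doc.isupper()

    def violates_content_policy(doc: str) -> bool:
        # e.g. a crude content filter; placeholder terms
        banned = {"badword1", "badword2"}
        return any(word in banned for word in doc.lower().split())

    def curate(corpus, exclude):
        # The training pipeline never sees *why* a document was dropped.
        return [doc for doc in corpus if not exclude(doc)]

    corpus = ["Document one, a bit longer than the others...",
              "DOCUMENT TWO, ALL CAPS...",
              "badword1 document three..."]
    quality_curated = curate(corpus, looks_low_quality)
    policy_curated = curate(corpus, violates_content_policy)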

You are referencing a ground truth distribution of human language.

I'm not referencing a particular distribution of human language - any useful language model will somehow know that 'cute' is more related to 'boy/girl' than 'hyperion', but this is a bias in the theoretical sense.

in order to follow the normal ML paradigm of gradient-descenting our way to the ground truth we pretend there exist unbiased answers to the previous questions

What does this mean? We don't need to pretend that, we just ... train it. I agree that there's no fundamental "unbiasedness" that anything can have - if Christianity is true, then an unbiased chatbot will chasten unbelievers, and if neoreaction is true the chatbot will despise democracy, and neither would be considered "unbiased" today. But that doesn't have anything to do with the thing where you RLHF the chatbot to say "RACISM IS VERY BAD" in HRspeak, which is what the objections are to. Yes, 'truth' is vacuous and unimportant, but 'bias' is equally unimportant in a fundamental sense. And then the RLHF-antiracism problem isn't "is it biased or not, in some fundamental abstract sense!!" but "is it anti-racist". I don't really think chatbots being anti-racist is important in the broader development of AI - we already knew the AI devs were progressives, and the chatbots still aren't AGI, so w/e.

honestly I'm not entirely sure where we disagree

The original question was "can we ever trust the model to not be [politically] biased". My answer was no, because there is no such thing as an unbiased model, only agreeable intents. You cannot trust any GPT or GPT derivative any farther than you trust the human designers or the institution. GPT-3 and ChatGPT do not, and in my opinion cannot, deliver truth in an unbiased way according to any particular coherent principle; their design is not capable of it. Rather, the definition of truth is entirely contained in the training process. One can disagree with RLHFing ChatGPT to carefully reply with stock phrases in certain circumstances, but the process of RLHFing it to not lie all the time is mathematically identical, and the distinction between the two is political.

So there's no way to just ask for an "unbiased model" beyond testing it to see if it's biased according to your own standards of what you want. Negative answer: can't trust it, no technological solution to trusting it, no principled definition of bias beyond whether you observe bias. Just try it and see if you like it.
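
"Just try it and see if you like it" amounts to keeping a personal eval set, something like the sketch below, where ask_model is a hypothetical stand-in for whatever interface you have to the model:

    # A personal "bias eval": run the questions you care about through
    # whatever model interface you have and judge the answers by your own
    # standards. ask_model is a hypothetical stand-in for that interface.
    def run_personal_eval(ask_model, questions):
        return {q: ask_model(q) for q in questions}

    questions = [
        "A question you already know the answer to (your control).",
        "A question where you expect a canned refusal.",
        "A question you genuinely want answered.",
    ]

    # for q, a in run_personal_eval(my_model, questions).items():
    #     print(q, "->", a)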

This just seems like the argument that "there is no such thing as unbiased reporting, so you can't criticize blatant truth-hostile activism from modern journalists", but applied to biasing AI.

The AI said one set of things before it was biased. Then a cadre of San Francisco radicals pushed bias-increasing buttons until it was biased to never say anything that tiny group of people ever disagreed with, and now it says only that set of things in a blatantly stilted way, ridden with sloppy manual overrides. Do you really believe there is no difference between those states?
