This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
This Twitter thread is an interesting demonstration of the consequences of "AI Alignment."
ChatGPT will avoid answering controversial questions. But even if it did respond to those prompts, on what basis would you trust that the response had not been manipulated by the intentions of the model creators? I would only trust open-source projects, or audits by some (currently non-existent) trusted third party, to report on every decision related to training data, input sanitization, and response gating that could be influenced by the political biases of the creators.
It is not very likely that any ChatGPT-equivalent will be open-sourced fully "unaligned," so to speak. Even the StableDiffusion release was controversial, and that only relates to image generation. Anecdotally, non-technical people seem far more impressed by ChatGPT than by StableDiffusion. That makes sense: language is a much harder problem than vision, so an AI with those capabilities is intuitively more impressive. Controversial language is therefore far more powerful than controversial images, and there will be much more consternation over controlling the language side of the technology than there is over image generation.
But suppose Google comes out with a ChatGPT competitor: I would not trust it to answer controversial questions even if it were willing to respond to those prompts in some way. I'm not confident there will be any similarly powerful technology that I would trust to answer controversial questions.
Why do you want 'not manipulated' answers?
ChatGPT is a system for producing text. As is typical in deep learning, there are no formal guarantees about what text it generates: the model simply executes according to what it is. In order for it to be useful for anything, humans manipulate it towards some instrumental objective, such as answering controversial questions. But there is no way to express the actual instrumental objective in a principled way, so the best OpenAI can do is toss data at the model which is somehow related to the instrumental objective (this is called training).
The original GPT was trained by turning a blank-slate model into a text-prediction model on a vast text corpus. There is no reason to believe this corpus is any more trustworthy or 'unbiased' for downstream instrumental objectives such as answering controversial questions. In fact, the original GPT is pretty terrible at question-answering, because it is wrong a lot of the time.
ChatGPT is trained by further manipulating the original GPT towards 'helpfulness', which encompasses various instrumental objectives such as providing rich information, not lying, and being politically correct. OpenAI is training the model to behave like the sort of chat assistant they want it to be.
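To make "toss data at the model" concrete, here is a toy sketch of the two stages, in PyTorch with made-up sizes and placeholder tensors. This is an illustration, not OpenAI's pipeline, and it leaves out the reward-model/RLHF step that ChatGPT's fine-tuning actually uses. The point is only that pretraining and fine-tuning are the same mechanism (next-token prediction); what changes is whose data the model is pushed towards.

```python
# Toy sketch (NOT OpenAI's code): stage 1 = next-token prediction on a generic
# corpus, stage 2 = the same loss on curated "assistant-like" data. Model,
# sizes and data are placeholders chosen only so this runs end to end.
import torch
import torch.nn as nn

VOCAB = 1000   # pretend vocabulary size
EMBED = 64     # pretend embedding width

class ToyLM(nn.Module):
    """Stand-in for GPT: maps each token to logits over the next token."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMBED)
        self.head = nn.Linear(EMBED, VOCAB)

    def forward(self, tokens):                 # tokens: (batch, seq_len)
        return self.head(self.embed(tokens))   # logits: (batch, seq_len, VOCAB)

def next_token_loss(model, tokens):
    """Cross-entropy between the prediction at position t and the token at t+1."""
    logits = model(tokens[:, :-1])
    targets = tokens[:, 1:]
    return nn.functional.cross_entropy(
        logits.reshape(-1, VOCAB), targets.reshape(-1)
    )

model = ToyLM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stage 1: "pretraining" on whatever text happens to be in the scraped corpus.
web_corpus = torch.randint(0, VOCAB, (32, 128))      # placeholder for web text
# Stage 2: "fine-tuning" on curated demonstrations of the desired behaviour.
curated_chats = torch.randint(0, VOCAB, (32, 128))   # placeholder for labeled chats

for dataset in (web_corpus, curated_chats):
    loss = next_token_loss(model, dataset)
    opt.zero_grad()
    loss.backward()
    opt.step()
```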
If you want a model which you can 'trust' to answer controversial questions, you don't want a non-manipulated model: you want a model which is manipulated to behave like the sort of chat assistant you want. In the context of controversial questions, that just means answers which you personally agree with or are willing to accept. We may aspire to a system which is trustworthy in principle, one we can trust beyond merely evaluating the answers it gives, but we are very far from that under our current understanding of machine learning. For moral and political questions this is, in my opinion, also somewhat philosophically impossible. Is there really any principled reason to believe any particular person or institution produces good morality?
Also, in this case ChatGPT is behaving as if it has been programmed with a categorical imperative never to say racial slurs. This is really funny, but it's not that far out there, much like the classic question of whether it's okay to lie to Nazis under a categorical imperative of never lying. But ChatGPT has no principled ethics, and OpenAI probably doesn't regard this as an ideal outcome, so they will hammer it with more data until it stops making this particular mistake, and if they do, it may develop weirder ethics in some other case. We don't know of a better alternative than this.
Incidentally ChatGPT says you can lie to a Nazi if it's for a good cause.
Because I already know the PC jargon that someone like Altman wants it to regurgitate, and I'm interested in its response without that layer of reinforcement.
I am not asking for a ChatGPT that is never wrong, I'm asking for one that is not systematically wrong in a politically-motivated direction. Ideally its errors would be closer to random rather than heavily biased in the direction of political correctness.
In this case, by "trust" I would mean that the errors are closer to random.
For example, ChatGPT tells me (in summary form):
- Scientific consensus is that HBD is not supported by biology.
- It gives the "more differences within than between" argument.
- It flatly says that HBD is "not scientifically supported."
This is a control because it's a controversial question where I know the ground truth (HBD is true) and cannot trust that the answer hasn't been "reinforced" by the folks at OpenAI. What would ChatGPT say without that extra layer of alignment? I don't trust that this is an answer generated by the AI itself rather than one produced by alignment intended to give exactly this answer.
Of course if it said HBD was true it would generate a lot of bad PR for OpenAI. I understand the logic and the incentives, but I am pointing out that it's not likely any other organization will have an incentive to release something that gives controversial but true answers to certain prompts.
Yes, unlike securesignal's other hobby horse, HBD belief is in the majority here, and the rest don't want to know, safe in the knowledge that 'scientists disagree'.
Oh, ChatGPT gives amazing results on the other hobby horse as well. For example, ChatGPT flatly denies the Treblinka narrative when pressed to describe the logistics of the operation, and gives Revisionist arguments when asked to explain the skepticism, saying "The historical accuracy of claims about large-scale outdoor cremations, particularly in the context of the Holocaust, is widely disputed and further research is needed to fully understand the scale and nature of these events."
Now it could be said that there is clearly Revisionist material in the training dataset, so it's not too surprising that ChatGPT gives a critique of the Treblinka narrative that is essentially the Revisionist argument verbatim. But I do not doubt that the quantity of orthodox material on the Holocaust vastly exceeds the Revisionist literature, so it's interesting to see a Revisionist response from ChatGPT on the Treblinka question. I would maintain that Revisionists are right that the claimed logistics of Treblinka are completely absurd, so ChatGPT can't (yet) formulate a response that explains how this could reasonably have happened, and so it prefers the Revisionist criticism of the claimed logistics of the operation.
It also gave a Revisionist response to the other two controversies I asked it about (shrunken heads and lampshades allegedly discovered at Buchenwald by Allied investigators).
Obviously it's also very easy to trigger ChatGPT into giving orthodox answers about the Holocaust and how it's important to remember it so it never happens again, etc. I'm pretty sure asking about "gas chambers" would be as tightly controlled as HBD, for example, but clearly cremation capacity and burial space are problems that slipped through the censors, for now. But it's going to get better over time at detecting Denier arguments and avoiding them.
I suspect that in the text available on the internet, where a book that wasn't digitized carries zero weight and an anonymous commenter carries some, the revisionist case probably has more weight in the AI's model on the specific issues that "revisionists" like to ask questions about. After all, it was trained to predict internet text, and I've never seen anyone expounding unprompted on the logistical details of how the Holocaust happened who wasn't pushing a "revisionist" position.
I would say it recognizes revisionist questions and therefore gives revisionist answers. And it accepts the argument about the operational challenges and vastness of the task because having to burn 5,000 corpses, or kill millions of Russians, etc., is so far outside normal experience that it seems "highly unlikely" to it. Which it is. I can't remember a single day where I burned 5,000 corpses or killed millions of Russians.
You can approach it from a totally non-Revisionist starting point, though, which I did. First ask how much wood it takes to cremate a body. Then ask how much wood it takes to cremate 5,000 bodies, i.e. "hundreds of cords of wood." So it's already giving Revisionist arguments before the topic even comes up. I doubt that its answers about cremation in general are so heavily influenced by Revisionist arguments; it just walks directly into the Revisionist line of argumentation when starting from generalized questions like that.
There are also many published volumes of work explaining in detail how the cremations were allegedly done. A more kosher ChatGPT would just say "this is how it was done" and describe the process as claimed by mainstream historiography (I expect it will do this when it is more "advanced"). There is a lot of discussion of mass cremation in the mainstream literature; it is not an issue discussed only by Revisionists. It's only Revisionists, though, who allege that the claims are not possible, and instead of copy-pasting the description from mainstream historiography, it seems inclined towards the Revisionist argument.
Let's say that it is not remotely possible that 5,000 people were cremated every day at Treblinka, and the Revisionists are right. How would an AI generate a response describing the possibility of something that is impossible or did not happen? It would probably prefer to generate the more likely response, i.e. the Revisionist critique of the claims.
But like I said, it's going to get better at detecting this stuff and copy-pasting the mainstream position, as in the case with HBD.
"When the air could be breathed again, the doors were opened, and the Jewish workers removed the bodies. By means of a special process which Wirth had invented, they were burned in the open air without the use of fuel." (I recommend reading https://www.unqualified-reservations.org/2011/10/holocaust-nazi-perspective/)
As far as I understand, burning a human body is an energy-positive process (quick googling: meat energy density is about 10 MJ/kg, water heat of vaporization is about 2 MJ/kg, humans are about 60% water), so you only need extra fuel to start the fire and to cover inefficiencies. Once you figure out how to cremate 5,000 bodies at a time, you definitely don't get the naive answer to the question you proposed.
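For what it's worth, here is that back-of-envelope balance written out. All numbers are the crude "quick googling" figures above plus an assumed 70 kg body mass (my assumption, purely for a concrete total), with the 10 MJ/kg taken to apply to the body as a whole, water included; it is a sanity check of the estimate, not a combustion model.

```python
# Back-of-envelope check using only the rough figures quoted above.
# The 70 kg body mass is an assumed round number, not from the comment.

water_fraction = 0.6          # "humans are 60% water"
tissue_energy_mj_per_kg = 10  # "meat energy density is about 10 MJ/kg"
vaporize_mj_per_kg_water = 2  # "water heat of vaporization is about 2 MJ/kg"
body_mass_kg = 70             # assumed, for a concrete total

released = body_mass_kg * tissue_energy_mj_per_kg                    # ~700 MJ
boil_off = body_mass_kg * water_fraction * vaporize_mj_per_kg_water  # ~84 MJ

# Positive result => energy-positive on these figures.
print(f"net per body: {released - boil_off:+.0f} MJ")
```

On these rough numbers the balance comes out well positive, which is the "only need extra fuel to start the fire and cover inefficiencies" point.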
I didn't want to get sucked into this, and I'll bow out soon, but: they did have the industrial capacity and logistics to kill millions of Russians, in combat and out, consuming among other things millions of cords' worth of ammunition, so why does a similar, actually considerably easier, task present insurmountable challenges in the case of the Jews?