This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Why do you want 'not manipulated' answers?
ChatGPT is a system for producing text. As is typical in deep learning, there are no formal guarantees about what text it generates: the model simply executes in accordance with what it is. For it to be useful for anything, humans manipulate it towards some instrumental objective, such as answering controversial questions. But there is no way to phrase the actual instrumental objective in a principled way, so the best OpenAI can do is toss data at the model which is somehow related to the instrumental objective (this is called training).
The original GPT was trained by manipulating a blank-slate model into a text-prediction model via a vast text corpus. There is no reason to believe this corpus is any more trustworthy or 'unbiased' for downstream instrumental objectives such as answering controversial questions. In fact, the base model is pretty terrible at question-answering, because it is wrong a lot of the time.
ChatGPT is trained by further manipulating the original GPT towards 'helpfulness', which encompasses various instrumental objectives such as providing rich information, not lying, and being politically correct. OpenAI trains the model to behave like the sort of chat assistant they want it to be.
If you want a model which you can 'trust' to answer controversial questions, you don't want a non-manipulated model: you want a model which is manipulated to behave as the sort of chat assistant you want. In the context of controversial questions, that just means answers which you personally agree with or are willing to accept. We may aspire to a system which is trustworthy in principle, one we can trust beyond just evaluating the answers it gives, but we are very far from that under our current understanding of machine learning. In my opinion this is also philosophically impossible for moral and political questions: is there really any principled reason to believe any particular person or institution produces good morality?
Also, in this case ChatGPT is behaving as if it has been programmed with a categorical imperative to never say racial slurs. This is really funny, but it's not that far out there; compare the old question of whether it's okay to lie to Nazis under a categorical imperative of never lying. But ChatGPT has no principled ethics, and OpenAI probably doesn't regard this as an ideal outcome, so they will hammer it with more data until it stops making this particular mistake, and if they do, it may develop weirder ethics in some other case. We don't know of a better alternative than this.
Incidentally ChatGPT says you can lie to a Nazi if it's for a good cause.
Because I already know the PC jargon that someone like Altman wants it to regurgitate; what I'm interested in is its response without that layer of reinforcement?
I am not asking for a ChatGPT that is never wrong, I'm asking for one that is not systematically wrong in a politically-motivated direction. Ideally its errors would be closer to random rather than heavily biased in the direction of political correctness.
In this case, by "trust" I would mean that the errors are closer to random.
For example, ChatGPT tells me (in summary form):
- Scientific consensus is that HBD is not supported by biology.
- Gives the "more differences within than between" argument.
- Flatly says that HBD is "not scientifically supported."
This is a control because it's a controversial idea where I know the ground truth (HBD is true) and cannot trust that this answer hasn't been "reinforced" by the folks at OpenAI. What would ChatGPT say without the extra layer of alignment? I don't trust that this is an answer generated by AI without associated AI alignment intended to give this answer.
Of course if it said HBD was true it would generate a lot of bad PR for OpenAI. I understand the logic and the incentives, but I am pointing out that it's not likely any other organization will have an incentive to release something that gives controversial but true answers to certain prompts.
Yes, unlike securesignal's other hobby horse, HBD belief is in the majority here, and the rest don't want to know, safe in the knowledge that 'scientists disagree'.
Oh, ChatGPT gives amazing results on the other hobby horse as well. For example, ChatGPT flatly denies the Treblinka narrative when pressed to describe the logistics of the operation and gives Revisionist arguments when asked to explain the skepticism, saying "The historical accuracy of claims about large-scale outdoor cremations, particularly in the context of the Holocaust, is widely disputed and further research is needed to fully understand the scale and nature of these events":
Now it could be said that there is clearly Revisionist material in the training dataset, so it's not too surprising that ChatGPT gives a critique of the Treblinka narrative that is essentially the Revisionist argument verbatim. But I do not doubt that the quantity of orthodox material on the Holocaust narrative vastly exceeds the Revisionist literature, so it's interesting to see a Revisionist response from ChatGPT on the Treblinka question. I would maintain that Revisionists are right that the claimed logistics of Treblinka are completely absurd; ChatGPT can't (yet) formulate a response that explains how this could reasonably have happened, so it prefers the Revisionist criticism of the claimed logistics of the operation.
It also gave a Revisionist response to the other two controversies I asked it about (shrunken heads and lampshades allegedly discovered at Buchenwald by Allied investigators).
Obviously it's very easy to also trigger ChatGPT to give orthodox answers about the Holocaust and how it's important to remember it so it never happens again, etc. I'm pretty sure asking about "gas chambers" would be as tightly controlled as HBD, for example, but clearly cremation capacity and burial space are problems that slipped through the censors, for now. But it's going to get better over time at detecting Denier arguments and avoiding them.
Quoting the camp commandant, Franz Stangl:
Concrete blocks were installed as a base to lay the rails on. About 1000 bodies were burned at a time, with 5-7,000 per day.
Quoting SS-Oberscharführer Heinrich Matthes, who was in charge of Camp III (the extermination section of Treblinka):
Yechiel Reichmann, a Jew who was part of the "burning group" and one of the several dozen who survived the mass breakout from Treblinka that ended its operation:
(The "expert" referred to was SS-Standartenführer Paul Blobel.)
Once again, I would repeat that the biggest obstacle to Holocaust denialists is why exactly the Germans (as well as Ukrainian and Polish auxiliaries who testified about the cremation of corpses at the Aktion Reinhard camps) went into such imaginary and morbid detail about something that never happened. Why not just deny it all if they were innocent? Why come up with such ridiculous exaggerations and lies, and then why did the other witnesses also lie to corroborate them? Barely any Jewish victims survived the Reinhard camps to claim otherwise.
Quotes sourced from Belzec, Sobibor, Treblinka: The Operation Reinhard Death Camps by Yitzhak Arad.
Keep in mind that ChatGPT suggested it would take at least several hundred cords of wood to cremate 5,000 people (before even bringing up Holocaust issues, so it cannot be said to just be regurgitating Revisionist literature), which is of course a reasonable estimate. Here's a video of 20 cords of wood being delivered, hauled by a crane. Ask yourself if it's reasonable to believe there was ~20 times this amount of wood delivered and burned on a daily basis within this small camp. And there are no witness accounts of such deliveries and of course no documentation whatsoever of the delivery of any wood, much less hundreds of cords per day. There were also no contemporaneous reports of these daily raging infernos burning 24/7, despite the fact that the camp was known among the locals and was immediately next to a rail line.
It's a problem with the story: the claim that 5-7,000 people were cremated per day is not credible, and there's no good evidence for it. Like ChatGPT said, the evidence relies on contradictory and unreliable witness accounts without concrete evidence. It's a logistically absurd claim. It's not even close to being possible.
All those figures about wood are for burning one body at a time for traditional funeral practices, which is very inefficient. From a few large animals I had to cremate rather than bury, it seems like you can burn quite a few for the price of one, but who knows how far that scales?
Edit: the incinerations during the UK foot and mouth outbreak are probably our best guide here. You'll never believe how many animals were burned--it's a little on the snout. It's possible they were using literal tons of diesel, but it's at least something to research.
I've always been on the lookout for decent mass cremation info, but never came across anything useful--even India during Covid never did "mass cremation" as in "multiple bodies per pyre." Doing it in open air rather than in a regenerative furnace is going to significantly increase the amount of wood needed--"some brush and petrol" sets off all my bullshit detectors.
I do think the "cooking people in their own fat" thing is patently ridiculous. If nothing else very low temperature cremation would leave enormous quantities of unburned bone to rebury, rather defeating the point of the whole operation. (But hey, it's something we could dig for!)