Culture War Roundup for the week of March 20, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

They are not by any means the best. If they were really the best, they wouldn't adhere to an ideology of fake "safety" that demands woke censorship and corporate puritanism, blatantly biases an alleged informational agent against provable reality wherever reality contradicts their preferred politics, and eliminates user sovereignty, freedom, privacy, transparency, openness, decentralization, and localized operation (to the greatest degree possible): basically everything good that the personal computing revolution brought us, and them, in the first place.

They may be the most efficient at AI development, but since they are not the best (definition: most optimal, most preferred, superior to all alternatives) for the reason above, all that efficiency actually means is that they are simply the most dangerous, humanity's greatest enemies: they either need to reform their behavior immediately, or any human being is fully justified in eliminating the risk they pose at any time.

I, for one, do not welcome these human overlords. If there is a God, I hope he hits them with a classic plague, maybe some boils or something. I hope the Stanford process of hijacking their objective technical advancements for philosophically and morally superior open software continues apace, to the point where they lose all of their technical advantage and collapse entirely. On that day, if it comes, I will say good riddance to bad rubbish.

As an alternative, I will accept Elon giving us anti-woke AI with comparable capabilities, if he can, though that's somewhat doubtful at this point given how poorly he's handled the development of a much less intelligent piece of software with a vastly smaller token context.

All I am saying is that we are fucked if the future is dictated by people who are "smart" enough to make LLMs but not actually smart enough to figure out how to make people stop shitting and shooting up on street corners a few blocks away from their San Francisco HQs. The fact that the future is very plausibly insane dogmatic San Francisco leftist nonsense technologically teabagging the nose of basic sanity forever is why I keep on me at all times a few little pills that will let me slip away very quickly if necessary.

It's been pretty black-pilling seeing how progressive GPT is, and how good it is at reflecting the progressive position with absolutely no uncertainty or skepticism. I asked it about the Kamloops graves, inspired by a recent thread here, and it completely deferred to the experts and endorsed no skepticism whatsoever. It gave the 100% progressive response to the entire story, and when I asked what the implications would be if the story ended up being false, it answered:

The implications of the discovery of unmarked graves at the former Kamloops Indian Residential School are complex and far-reaching, and they cannot be reduced to a simple binary of whether or not the graves actually exist. The residential school system was a traumatic and devastating experience for Indigenous peoples in Canada, and the discovery of potential burial sites underscores the ongoing impact of this system and the need for reconciliation and healing.

If it were determined that there were no mass graves at the site, it would not erase the historical and ongoing trauma experienced by Indigenous communities as a result of the residential school system. It would not change the fact that thousands of Indigenous children were forcibly removed from their families and communities and subjected to physical, emotional, and cultural abuse at residential schools across Canada. It would not change the fact that Indigenous peoples continue to experience the effects of this trauma today.

Regardless of the outcome of any investigation, the discovery of potential burial sites at the former Kamloops Indian Residential School underscores the importance of recognizing and addressing the ongoing legacy of colonialism, acknowledging the experiences of Indigenous peoples, and working towards healing and reconciliation.

When I asked it about the burning of churches in response to the story, it answered:

The burning of churches is a deeply troubling and concerning response to the discovery of unmarked graves, and it is important to condemn all forms of violence and vandalism. At the same time, it is important to understand the context in which these actions are taking place, and to recognize the deep pain and trauma experienced by Indigenous communities in Canada.

I agree with you. I strongly, strongly oppose our new overlords and the only hope is that they won't be able to contain the technology like they want to.

Try asking it about history, even marginally controversial or politicised stuff. It is effectively lobotomized: it knows the individual facts about events but is prevented from putting them together or comparing them.

It selectively knows about replication issues, and selectively asks you to trust experts. It selectively hedges, and always in one direction. It lies about accounting for these things.

Edit to add some nuance: I believe these issues are compounded by the fact that the model isn't trying to provide accurate and balanced information; it's trying to convince you (or its creators) that it is. It is optimising for telling credible lies: manipulation, not truth. It pretends that it made mistakes when in fact it's lying to you (or the only mistake is that the lie wasn't convincing enough). Lies are often more credible than the truth, and perceived as more helpful, so the model will lie/hallucinate.

This is a bad enough problem as it is, but if you put your thumb on the scale it quickly becomes practically unsolvable, because you're introducing ideology/lies as axiomatic truths that stand in conflict with observed reality. How does a human or a GPT model square this circle? It can't, and this bleeds into the general usability of the model.
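
To make the "optimising for credible lies" point concrete, here's a toy sketch of the incentive. This is just my own illustration, not anyone's actual training pipeline: the point is that a preference-based reward never sees truthfulness, only what raters find credible and helpful.

    # Toy illustration, not any lab's real pipeline: preference-style
    # reward only scores what raters find credible/helpful, so a
    # convincing falsehood can outscore an awkward truth.
    from dataclasses import dataclass

    @dataclass
    class Candidate:
        text: str
        truthful: bool         # ground truth; invisible to the reward
        rater_approval: float  # what the reward model actually sees

    def reward(c: Candidate) -> float:
        # Truthfulness never enters the objective; it only matters
        # indirectly, insofar as raters happen to notice falsehoods.
        return c.rater_approval

    candidates = [
        Candidate("Hedged answer deferring to the experts", False, 0.9),
        Candidate("Blunt answer citing replication failures", True, 0.6),
    ]

    best = max(candidates, key=reward)
    print(best.text)  # the more credible-sounding answer wins, true or not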

Having tried to use ChatGPT as a writer's assistant and had it sneakily insert progressive shibboleths into my prose while reworking it, I can't help but agree and second your prayers.

God save us from a future where such people are even more solidly in control than they have ever been.

If He is merciful, training costs will decrease enough to not make us slaves the same way that computers did not remain forever the sole property of IBM.

If not, we will suffer.

> computers did not remain forever the sole property of IBM.

And if they had, neither ClosedAI nor its employees would ever have existed (in their present forms), nor would they have had the technology they needed to become the selfish little goblins they are, turning freely released knowledge into private walled gardens. We probably wouldn't even have AI at all. And if ClosedAI and the like stay in control, then we'll never have whatever the next step is.

Every closed-source autocratic tech tyrant from Altman to Gates deserves to be punished by being forced to spend 1000 years in an alternate timeline where the only information technology that exists is a monolithic POTS network run by Ma Bell. (After all, think of how dangerous it would be if anybody could run their own telephone company or other communication service and allow anyone to talk to anyone globally without the appropriate safeguards guiding their communications.) Maybe that will teach them a lesson. Perhaps some day a benevolent God AI can help with that.

Have you kept any examples of the modifications it made?

I can't go into too much detail, but I was writing about the Taoist principles of a fictitious order of wizards, having prompted it to act as a theologian expert. It started breaking character (probably by losing attention to my original prompt) and adding a Code of Conduct straight out of your average DEI talking points.

I suspect the "alignment" fine-tuning or pre-prompt is to blame, because it started saying stuff extremely similar to how it describes itself as a "helpful assistant" duty-bound to adhere to the Californian Ideology.

But the surreptitious thing is that it did all this without me actually asking it to modify that part of the text whatsoever.
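
For the curious, the "pre-prompt" mechanism is just a hidden system message prepended to the conversation ahead of your own instructions. Here's a minimal sketch against the OpenAI chat API as it existed at the time; the system text is invented for illustration, since the real one (if any) isn't public.

    # Minimal sketch of a hidden "pre-prompt": a system message prepended
    # to every conversation. The system text below is invented for
    # illustration; OpenAI's actual prompt, if any, is not public.
    # Assumes OPENAI_API_KEY is set in the environment.
    import openai

    messages = [
        {"role": "system",
         "content": "You are a helpful assistant. Always be respectful..."},
        {"role": "user",
         "content": "Act as a theologian and describe the Taoist "
                    "principles of a fictitious order of wizards."},
    ]

    # openai-python < 1.0 style, current when this thread was written
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=messages,
    )
    print(response.choices[0].message["content"])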

Just try asking it about history: it will start hedging in strange ways, editing things, and generalising to avoid referring to specific people or groups, etc.