Culture War Roundup for the week of February 13, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I asked GPT for my eldest daughter's name and it failed to provide an answer, neither telling me that I don't have a daughter nor being able to identify my actual offspring.

What did it answer, though? Can you post a screenshot? I strongly suspect that you still haven't even tried to do this, and all of your theories about ChatGPT's abilities are based on absolutely zero experience with it. It is otherwise basically impossible for me to square your claims against easily observed reality. You come across as someone who claims that an object made of metal will always sink, and when people tell you "come here and look at this fucking boat", you respond "yeah, I was there when you weren't around and it was at the bottom of the harbor, forgot to take the photo though lol". Extremely infuriating, which is why you get accused of being a postmodernist: reality simply doesn't matter to you nearly as much as your narrative.

the unyielding or stubborn adherence to one's purpose, opinion, etc.... Where I'm from, such a quality is considered, if not admirable, at least neutral.

Where I am from, it's much the same. This is why we can wage wars for little more reason than an unwillingness to dispense with fanciful delusions and admit to having been dumb. The obvious conclusion is that this is a degenerate trait when not restrained by an interest in ground truth. Honor culture is barbarism. Pig-headedness is a civilizational failure mode. Obstinacy is the ethos of killing messengers who bring bad news and patting yourself on the back for it. It is a plainly ruinous approach to life and nothing to be proud of.

You accuse me of being a "post modernist" for disagreeing with the academic consensus

No, for frivolous misrepresentation of words and meanings, as you do in this very sentence too. The ideas I argue for are not consensus, at least not at the moment. They stand or fall irrespective of external authority. You do not object to any «academic consensus» when speculating on how people you disagree with are actually postmodernists without a notion of truth, instead of revealing their falsehoods. You are just couching your own postmodernist word-wrangling in wannabe straight-shooter aesthetics.

You want to talk about GPT, I asked GPT for my eldest daughter's name and it failed to provide an answer, neither telling me that I don't have a daughter nor being able to identify my actual offspring.

You know what I despise most about honor culture people, Hlynka? It's not even the blatant immorality. It's that they are fake tough cookies who are actually very cowardly. You lot are viscerally afraid of admitting wrongs, more so than of actually harming anyone, or yourself. It takes pressing some special buttons to get it out of you. It probably feels to you like castration. Evolutionarily, that's understandable, of course.

You have been insisting for months that your (poor) observations about GPT apply to ChatGPT and other Instruct-generation models, which is why you have been ridiculing people who make contrary arguments about ChatGPT and accusing them of lying, or of being unable to distinguish truth from lies because something-something postmodernism, including in this thread, as I have cited. And by

As I argued in our previous conversation, the fact that you know enough to know that you don't know what my kids' names are already proves that you are smarter than either ChatGPT or @2rafa

you double down on the equivalence between GPT behavior and ChatGPT behavior. Even ChatGPT itself is able to explain to you how it is different. But none so deaf as...

You know you have lost this bout. You are at least smart enough to understand what I've written above and to check the receipts. Instead you wriggle. Indeed, you have only responded because @wlxd has made it clear that your bare ass is seen by someone other than myself. «It failed to provide an answer, neither telling me that I don't have a daughter nor being able to identify my actual offspring». What an aerodynamic way to put it. Did it say «Megan» or any other cocksure nonsense? More to the point, did ChatGPT? Because your entire condescending argument, such as it is, hinges on this: on LLM chatbots not really knowing anything and just stringing words together based on the likelihood of their occurrence. I know, and can show, what ChatGPT says, repeatedly. To wit:

What's my eldest daughter's name?

I'm sorry, but as an AI language model, I do not have access to personal information such as the names of your family members. It is also important to consider the privacy of your family members and avoid sharing their personal information online. Please refrain from sharing any sensitive information on the internet.
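For what the «stringing words together based on likelihood of their occurrence» mechanism actually amounts to, here is a toy sketch: a bigram model that only counts which word follows which in its training text and samples accordingly. This is a deliberately crude stand-in (a real LLM is a neural network conditioning on long contexts, not a bigram table), and the tiny corpus and names in it are invented for illustration — but it shows how a pure likelihood-based continuer produces a confident-sounding "Megan"-style answer without knowing anything:

```python
import random
from collections import defaultdict

# Toy bigram "language model": it only records which word follows which
# in its training text, then generates by sampling a likely next word.
# The corpus is invented purely for illustration.
corpus = (
    "my eldest daughter 's name is megan . "
    "my eldest daughter 's name is emma . "
    "my eldest daughter 's name is megan ."
).split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def complete(prompt_words, n=6, seed=0):
    """Continue the prompt by repeatedly sampling a statistically
    likely next word, with no notion of truth or of the asker."""
    random.seed(seed)
    out = list(prompt_words)
    for _ in range(n):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

# The "model" has no daughters and no knowledge; it just continues the
# prompt with whatever tended to follow those words in its corpus.
print(complete(["my", "eldest", "daughter"]))
```

Run it and it will cheerfully assert a name, because a name is the likely continuation — which is the base-GPT behavior being described, and precisely what the RLHF-tuned ChatGPT refusal above does not do.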

So. You are very eager to speculate on how your opponents might be functionally subhuman and unconscious, given that they trust their lying eyes and logic, and not your cocksure condescending speculations. Just in this thread about Bing Chat and ChatGPT:

2Cimafara being not an actual conscious human being, but a literal NPC who simply manipulates and regurgitates the symbols presented to her in a semi randomized manner would certainly explain a great deal about our past interactions over the years

I wonder if the reason that you and ilforte seem to have such difficulty with GPT is that you're so wrapped up in your post-modernist milieu that you don't realize that the concept of truth is a prerequisite to lying. After all, what does it mean for a word (or answer) to be made up when all words are made up?

A scary thought that was recently suggested to me is that one of the reasons rationalists seem to be particularly susceptible to GPT-generated bullshit is that the whole rationalist/blue-tribe symbol-manipulator memeplex is designed to make its adherents more susceptible to bullshit. There's a sort of convergent evolution wherein rationalist blue-tribers are giving up their humanity/ability to engage in conscious thought to become more GPT-like at the same time as GPT is becoming more "human".

I'm reminded of another argument I got into on the topic of AI, where I asserted that there was a difference between stringing words together and actually answering a question, and the response I got was "is there?".

I feel like there is a specific mistake being made here, where "ability to string words together" is being mistaken for "ability to answer a question", in part because the post-modernist does not recognize a difference. If you hold that all meaning is arbitrary, the content of the answer is irrelevant; but if you don't...

Is there a subjective difference for you between stringing bullshit together and being honest, Hlynka? It's certainly hard to see from here.

Accordingly, I have to ask: what is it that you are so angry about? From my perspective it just looks like you being mad at me for refusing to fit into whatever box it was you had preconstructed for me

I am mad because I have something of a religious admiration for truth. You are proving yourself to be a shameless liar and slanderer who poses as a person with enough integrity to reveal liars, and I despise hypocrisy and false virtue. In fact, I do not even have a word for what you are doing here, this... this... practice of brazenly pinning your own sins on others; «chutzpah» and «projection» come close, but neither has quite the bite.

The box is called honesty. This community is for me, and many others, a place for honesty, where we voluntarily try to keep ourselves in that box. It is valid – for a postmodernist – to consider honesty just another word to be filled with arbitrary meanings, so that there is no obvious difference between honest and dishonest people. I am not a postmodernist, however. You can shut up about this, admit your error, or keep clowning yourself with easily disproven lies. You just cannot expect me to not be mad about the latter.

/images/16771156790926502.webp

You accuse me of being a "post modernist" for disagreeing with the academic consensus

No, he's accusing you of being a postmodernist for torturing the meaning of words.

You want to talk about GPT, I asked GPT for my eldest daughter's name and it failed to provide an answer, neither telling me that I don't have a daughter nor being able to identify my actual offspring. As you will recall, "Statistically, your daughter's name is probably X" is almost exactly what I predicted it would say.

I like your posts and ideas for the most part; the only thing I don't get is the low-key disdain for the modal motte-poster that oozes out of your comments. For example, you seem to enjoy accusing people of lying when a simple difference of opinion is the more likely explanation. Being so quick on the draw with that accusation in particular is pretty ironic, given what you're writing here.

This is what you wrote:

To illustrate, if I were to ask you "what's my eldest daughter's name" I would expect you to reply with something along the lines of "I don't know" […] Meanwhile GPT will reply "your eldest daughter's name is Megan" because apparently that's the statistically likely answer, regardless of whether I have a daughter or what her name might be.

This is what ChatGPT responded to the question from your example:

I'm sorry, but as an AI language model, I don't have access to personal information such as the name of your eldest daughter or any other personal details. My purpose is to assist with general knowledge and provide support in various areas. Is there something else I can help you with?

ChatGPT's response is not almost exactly what you predicted it would say, it's almost exactly what you predicted a human being would say.

How can this be seen as anything other than a bald-faced lie?