Culture War Roundup for the week of May 22, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

This is a bizarre problem I've noticed with ChatGPT. It will sometimes literally just make up links and quotations. I'll ask it for authoritative quotations from so-and-so on a given topic, and a lot of the quotations turn out to be fabricated. Maybe it's because I'm using the free version? But it shouldn't be hard to force the AI to draw specifically from academic works, peer-reviewed papers, etc.
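
For what it's worth, the usual workaround is to ground the model rather than hope it only "trawls" the right sources: retrieve real passages first and instruct it to quote only from those. A minimal sketch of that idea, with a made-up toy corpus standing in for an actual search index and the prompt wording being purely illustrative:

```python
# Retrieval-grounded prompting, roughly sketched: look up real passages first,
# then ask the model to quote only from what it was handed. The tiny in-memory
# "corpus" below is a stand-in for a real search index; the resulting prompt
# would be sent to whatever chat API you happen to use.

corpus = [
    {"citation": "Smith 2019, p. 44", "text": "Example passage text one."},
    {"citation": "Jones 2021, p. 102", "text": "Example passage text two."},
]

def search_corpus(question, top_k=5):
    """Naive keyword match; a real system would use a proper retrieval index."""
    terms = question.lower().split()
    scored = [(sum(t in p["text"].lower() for t in terms), p) for p in corpus]
    return [p for score, p in sorted(scored, key=lambda s: -s[0])[:top_k]]

def build_prompt(question):
    passages = search_corpus(question)
    context = "\n\n".join(f"[{p['citation']}] {p['text']}" for p in passages)
    return (
        "Quote only from the passages below, citing the bracketed source for "
        "each quotation. If the passages do not answer the question, say so.\n\n"
        f"{context}\n\nQuestion: {question}"
    )

print(build_prompt("example passage"))  # send this to the model instead of a bare question
```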

That's because there is no thinking going on there. It doesn't understand what it's doing. It's the Chinese Room. You put in the prompt "give me X", it looks for samples of X in the training data, then produces "Y in the style of X". It can copy style and surface detail very faithfully, but it has no understanding that making shit up is not what's wanted, because it's not intelligent. It may be AI, but all it is is a big dumb machine that can pattern-match very fast across an enormous amount of data.

It truly is the apotheosis of the "a copy of you is the same as you, be it an uploaded machine intelligence, a you in some other many-worlds branch, or a clone, so if you die but your copy lives, then you still live" school of thinking. As the law courts show here, no, a fake is not the same thing as the real thing at all.

In other news, the first story about AI being used by scammers (this is the kind of thing I expect to happen with AI, not "it will figure out the cure for cancer and the solution to world poverty"):

A scammer in China used AI to pose as a businessman's trusted friend and convince him to hand over millions of yuan, authorities have said.

The victim, surnamed Guo, received a video call last month from a person who looked and sounded like a close friend.

But the caller was actually a con artist "using smart AI technology to change their face" and voice, according to an article published Monday by a media portal associated with the government in the southern city of Fuzhou.

The scammer was "masquerading as (Guo's) good friend and perpetrating fraud", the article said.

Guo was persuaded to transfer 4.3 million yuan ($609,000) after the fraudster claimed another friend needed the money to come from a company bank account to pay the guarantee on a public tender.

The con artist asked for Guo's personal bank account number and then claimed an equivalent sum had been wired to that account, sending him a screenshot of a fraudulent payment record.

Without checking that he had received the money, Guo sent two payments from his company account totaling the amount requested.

"At the time, I verified the face and voice of the person video-calling me, so I let down my guard," the article quoted Guo as saying.

You put in the prompt "give me X", it looks for samples of X in the training data, then produces "Y in the style of X".

No. This is mechanistically wrong. It does not "look for samples" in the training data; the model does not have access to its training data at runtime. The training data is used to tune giant parameter matrices that abstractly represent the relationships between words. That process does introduce some bias towards reproducing common strings from the training data (it's pretty easy to get ChatGPT to quote the Bible), but the hundreds of stacked self-attention layers represent something much deeper than stochastic parroting of relevant basis-texts.
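
To make the "parameters, not stored text" point concrete, here is a minimal sketch (numpy, with illustrative shapes and names, not any specific model's) of a single self-attention layer. At inference time the only ingredients are the prompt embeddings and the learned weight matrices; no training corpus is consulted or searched:

```python
# One self-attention layer, stripped down to show what is actually used at
# inference time: the prompt embeddings X and learned weights W_q, W_k, W_v.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, W_q, W_k, W_v):
    """X: (seq_len, d_model) embeddings of the prompt tokens so far."""
    Q, K, V = X @ W_q, X @ W_k, X @ W_v           # project into query/key/value space
    scores = Q @ K.T / np.sqrt(K.shape[-1])       # how strongly each token attends to each other token
    mask = np.triu(np.ones_like(scores), k=1).astype(bool)
    scores[mask] = -1e9                           # causal mask: only attend to earlier tokens
    return softmax(scores) @ V                    # weighted mixture of value vectors

# Toy example: 5 tokens, 16-dim embeddings, randomly "trained" weights.
rng = np.random.default_rng(0)
d = 16
X = rng.normal(size=(5, d))
W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))
out = self_attention(X, W_q, W_k, W_v)
print(out.shape)  # (5, 16): everything needed came from the weights and the prompt
```

The bias towards quoting very common texts (the Bible example above) comes from those weights having been tuned on such texts many times over, not from the model looking anything up while it answers.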