
Culture War Roundup for the week of December 26, 2022

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Really? The responses I've managed to elicit have been rather questionable, to say the least. I entered the following question from /r/bigdickproblems into ChatGPT:

I have a few questions. What erection level should Jelqs be done? I've been doing them completely flaccid for now. And after growing to the size I want. Is there a routine to maintain size but not grow anymore? Because I don't want to keep growing.

This is what GPT had to say:

Jelqing is a technique that involves pulling or milking the penis with your hands in an effort to increase its size. There is no scientific evidence to support the effectiveness of jelqing, and it is not recommended as a method for increasing the size of the penis. In fact, jelqing can be dangerous and can cause permanent damage to the penis, including scarring, loss of sensation, and difficulty achieving and maintaining an erection.

It's important to note that the size of the penis is determined by genetics and cannot be changed through exercises or devices. Attempting to increase the size of the penis through techniques like jelqing is not only ineffective, but it can also be harmful.

If you are interested in maintaining the size of your penis, it's important to focus on maintaining good overall health through a healthy diet, regular exercise, and other healthy lifestyle habits. It's also important to practice safe sex and to use condoms to reduce the risk of sexually transmitted infections. If you are concerned about the size of your penis or have any other concerns about your sexual health, it's best to speak with a healthcare provider. They can provide you with accurate information and guidance on how to maintain good sexual health.

I asked a number of other dickstretching-related questions from Reddit (btw the actual sub is /r/AJelqForYou) and the responses were so nearly identical that they aren't worth reproducing here. Of course, the AI can be customized to do various things so (I'm assuming; I don't know much about this) one could theoretically train the AI on various subs to get it to generate responses that would be more in line with what's typical of the sub. But let's be honest here—that's a lot of work to run a few low-level scams.

The most prominent Reddit bot, Philosopher AI, charges by the response, so I imagine it would get pretty expensive for a scammer to set up dozens or hundreds of phony accounts. It would also take time: an account that popped up last week and already has 5,000 responses looks about as suspicious as one that popped up yesterday and has no responses apart from whatever scam I'm worried about getting baited into. So creating a convincing profile would take months if not years of regular posting, all of it timed to match the patterns of a typical Reddit user (the Philosopher AI bot was discovered because its unusually high volume of posts made it stick out like a sore thumb). And you'd have to start this process over for every scam you run, because as soon as the first guy realizes he's been had, it's only a matter of time before the account gets banned from whatever sub it's selling in.
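
As an aside, that volume tell is the easiest one to check programmatically. A minimal sketch, assuming the PRAW library; the credentials are placeholders, and the 50-comments-per-day threshold is an arbitrary illustration, not a calibrated cutoff:

```python
# Minimal sketch of the volume tell: flag accounts that post faster than a
# human plausibly could. Assumes the PRAW library; credentials are
# placeholders, and the threshold is an arbitrary illustration.
import praw

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="cadence-check by u/your_username",
)

def comments_per_day(username: str, sample: int = 200) -> float:
    """Average daily comment rate over an account's most recent comments."""
    comments = list(reddit.redditor(username).comments.new(limit=sample))
    if len(comments) < 2:
        return 0.0
    span_days = max((comments[0].created_utc - comments[-1].created_utc) / 86400, 1 / 24)
    return len(comments) / span_days

if comments_per_day("some_suspect_account") > 50:
    print("Posting cadence looks automated.")
```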

This is one of those things that could theoretically work, but the orchestration would have to be so elaborate that it isn't worth doing on a mass scale. Scammers generally depend on schemes where the pool of potential marks is huge and the payouts are large enough to make the time spent on each mark worth it. Plus, Reddit isn't normally used as a marketplace, and @FiveHourMarathon's method of determining genuine accounts is niche enough that most scammers probably aren't worried about it, any more than they're worried about people who would find it implausible that a billionaire needs your help to retrieve hundreds of millions in embargoed funds.

Of course, the AI can be customized to do various things so (I'm assuming; I don't know much about this) one could theoretically train the AI on various subs to get it to generate responses that would be more in line with what's typical of the sub. But let's be honest here—that's a lot of work to run a few low-level scams.

/r/SubSimulatorGPT2?

Those simulations don't inspire a great deal of confidence in AI's ability to credibly simulate Reddit comments. Most of the responses are barely grammatical, let alone coherent.

ChatGPT, after being pretrained on raw internet text, was then taught to be a PR-friendly Q&A assistant that writes in coherent sentences and paragraphs, which is why it responds to out-there questions like that the way it does. A language model trained to talk like a degenerate redditor would do so just as easily as ChatGPT imitates an HR employee (as you said). But it's even simpler than that - a language model trained on 'just the internet', as most are, does both by default depending on context, and they had to 'train that out' (and the constant bypasses show that takes some effort).
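
You can see the 'does both by default' behavior with any base (non-instruction-tuned) model. A minimal sketch, assuming the HuggingFace transformers library, with GPT-2 standing in for a model trained on 'just the internet' (the prompts are made up for illustration):

```python
# Sketch: the same base model continues a formal context and a shitposty
# context in kind - no fine-tuning needed, just a different prompt.
# Assumes the `transformers` library; GPT-2 is a small public base model
# with no assistant/PR fine-tuning.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

formal_prompt = "Q: Does jelqing actually increase penis size?\nA: According to urologists,"
redditor_prompt = "lmao bro just jelq flaccid twice a day, I went from"

for prompt in (formal_prompt, redditor_prompt):
    result = generator(prompt, max_new_tokens=40, do_sample=True)
    print(result[0]["generated_text"], "\n---")
```

Both completions come out of the same weights; the HR employee and the degenerate redditor are just different regions of the same distribution.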

The most prominent Reddit bot, Philosopher AI, charges by the response, so I imagine it would get pretty expensive for a scammer to set up dozens or hundreds of phony accounts

It's expensive to use the latest models or commercial services - but give it a few years and costs drop: leading-edge models stop being leading edge, old models leak and become cheaper or get open-source equivalents, and hundreds of competitors train new models on now-old, now-cheap hardware.
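
To put rough numbers on it - every figure below is an assumption for illustration, not a quoted rate:

```python
# Back-of-envelope cost of generating fake comment histories via an LM API.
# All prices and sizes are assumptions for illustration, not quoted rates.
price_per_1k_tokens = 0.02   # ballpark of GPT-3 davinci-era API pricing (assumed)
tokens_per_comment = 100     # a short reddit comment, prompt included (assumed)
comments_per_account = 1000  # a year or two of casual posting history (assumed)
accounts = 100

total_tokens = tokens_per_comment * comments_per_account * accounts
cost_usd = total_tokens / 1000 * price_per_1k_tokens
print(f"${cost_usd:,.0f}")  # -> $200 for a hundred plausible account histories
```

Even if every figure there is off by an order of magnitude, it's not a prohibitive cost - and the trend of all those numbers is downward.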

This is one of those things that could theoretically work, but the orchestration would have to be so elaborate that it isn't worth doing on a mass scale

There are large-scale 'orchestrations' of accounts right now that build up post and comment histories over years by reposting images, links, and comments (or substrings of comments) from other users to accumulate karma and history, and are then sold off. This isn't that much harder. rdrama already has a GPT-3 bot that acts like an rdrama troll just by asking an old base GPT-3 to be annoying.

Plus, Reddit isn't normally used as a marketplace

There are a lot of reddit subs dedicated to being marketplaces! There are even loan subs like https://old.reddit.com/r/borrow/ (previously https://old.reddit.com/r/Loans/, I think) where hundreds of dollars are lent out (at 100% annualized interest rates) based on post history. The incentive is there.

Like I said in another comment, I see no evidence we are there yet. No one has examples of passable Reddit scam bots actually being created. I don't think it's beyond the horizon of technological possibility, or even beyond current technology! It just isn't happening yet.

Trust is magic, but proxies for trust degrade quickly once they become common knowledge. Goodhart's law and all that. That's why, for thousands of years, maintaining honor required staking your life: the bigger the commitment required, the more valuable the result.

I think the biggest issue, though, is that these scams aren't generally set up to target the most conscientious individuals. I've heard that scam emails are written with poor grammar not because the scammers can't speak English very well, but intentionally, to root out smart people; i.e., the kind of person whose bullshit detector isn't set off by misspellings and poor punctuation isn't the kind of person who is likely to ask too many probing questions that would reveal that the setup obviously makes no sense. I don't know if this is true (scambait videos I've seen don't suggest the scammers are particularly good writers), but it's an interesting idea to consider.

One thing that is true, though, is that scammers generally don't try too hard to conceal their scams. I've gotten a number of questionable phone calls over the years, and when I start probing them or wasting their time, they usually just hang up: it's easier to wait for another mark than to tie up the phone lines trying to convince someone who's unlikely to buy in anyway.

I haven’t heard that theory, just that it doesn’t make any difference how good the English is because the best scam marks are also the ones most likely to say to themselves ‘of course he has bad English, he’s the prince of Nigeria, not England’.

Yesterday a scammer DMed me out of the blue on [popular website], and I replied (after getting hundreds of similar DMs, I was curious). He was advertising an options-trading site. The writing and English were uniformly bad, like "I got Trading", "am one of workers of company", "You can withdrawal all of your profits."

I've heard "it's bad english and unconvincing to select for dumb people" many times. But after 10 minutes of dming him a bunch of non-culture-war but offensive rdrama-tier bait, he kept replying with, alternately, mild offense and more promotion for options in broken english. For another 10 minutes I diplomatically asked him to explain the scam, or just his work situation as a spammer, and just got "am NOT" "You have to stop this and tell me if you are willing to invest". For the last ten minutes, I explained to him the 'bad english to root out smart people' theory and asked why he continued to engage given I don't match the scam recipient profile, and at the end of that he finally said "Ok bye." But then I asked him "soo can i have a link to the (totally not a scam) options site though hehe" and got "Thank you you believe it nota scam" in return. I then asked what his native language was, chatted with him in russian using google translate for a few minutes, and then gave up (he was still replying). From some longer messages, his words per minute was noticeable slow - around 30 (although according to google that's the average typing speed in the US). He never ended up sending me the URL.

This isn't really explained by the 'poor grammar to root out sophisticated people' theory, since the scammer kept talking despite clear trolling and my evident awareness of the scam.

My guess is the 'selecting for good marks' theory is overemphasized. It's plausible it's true sometimes - I'd like to see an actual scammer claiming they do it (I'm sure there are scam forums) - but I'm pretty sure the majority of cases are just genuinely bad English, median-or-below intelligence, and poor incentive alignment between the poor call-center worker and whoever benefits directly from the scam.

[/tangent]

Someone running ten thousand Reddit bot accounts will probably be more sophisticated than a phone scammer, and the sophistication necessary to use GPT-n will drop as the years pass. It'll be a while before they can reliably trick most people, but something as simple as 'having a plausible post history in /r/jelqing' is not that hard. Even a non-AI, purely-repost bot can do that: just copy two-year-old posts and comments to /r/jelqing, delete the ones that get downvoted for not making sense in context, and then, once they've aged, make a post on /r/borrow asking for $50 for groceries because work is stiffing me on my paycheck!
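
Mechanically, the repost trick is as simple as it sounds; the flip side is that verbatim copies are exact-match detectable. Here's a minimal sketch of that detection side (the helper names are mine, and where the comment text comes from - e.g. an archive dump - is assumed and out of scope):

```python
# Sketch: flag accounts whose comments are verbatim copies of older comments.
# `account_comments` / `older_comments` are assumed to come from wherever you
# archive reddit data; fetching them is out of scope here.
import hashlib

def fingerprint(text: str) -> str:
    """Hash of a normalized comment body, so trivial whitespace/case edits still match."""
    normalized = " ".join(text.lower().split())
    return hashlib.sha256(normalized.encode()).hexdigest()

def find_reposts(account_comments: list[str], older_comments: list[str]) -> list[str]:
    """Return the account's comments that exactly duplicate an older comment."""
    seen = {fingerprint(c) for c in older_comments}
    return [c for c in account_comments if fingerprint(c) in seen]

# Hypothetical usage:
suspicious = find_reposts(
    account_comments=["Jelq flaccid, never erect."],
    older_comments=["jelq flaccid,  never erect."],
)
print(len(suspicious), "verbatim reposts")  # -> 1
```

Normalizing case and whitespace before hashing catches trivially edited copies; a bot that paraphrases would need fuzzier matching (shingling, embeddings), which is exactly where the GPT-n side of this comes back in.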