
Culture War Roundup for the week of December 26, 2022

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


The Dickstretcher Theory of Online Credibility: A Turing test for the Social Media Age

One bullet point on my little Reddit-Ghislaine-Epstein conspiracy theory post that drew a lot of laughter was my story of buying an expensive vintage watch on Reddit, from a user I gave a lot of credibility to because he posted in strange and obscure subreddits on the same account, including a subreddit for hobbyists who stretch the penis to restore a circumcised foreskin or to attempt to extend length. Obviously dickstretching does not coincide with high trustworthiness or reliability, nor does it particularly coincide with expertise in watches. But it's simply so strange a thing that it passes the Turing test.

The big pile of comments on a random, obscure hobby subreddit is the text equivalent of reCaptcha tests that just require a click. The process is simple: it wouldn't be hard for a scammer to comment on weird subreddits, or to program a bot to do it, but A) to my knowledge no one tries that, B) it would take a fair amount of effort and time for an account that would later get banned, and C) I do think there is something ineffable about the drunkard's walk of a real human commenting on weird shit that real humans like. I'm thinking about how this fits into a broader theory of online credibility and how to assign it.

I've talked before about James Clavell's fake-Japanese three-hearts model. Humans are vast; we contain multitudes. We have different layers of opinions: those we share with everyone, those we share with some, and those we share with no one at all. These layers can be as different from one another as separate identities.

Balaji, in his interview on the Lex Fridman podcast, talked about how different forms of identity interact online. Your real-name account is often presenting a fake version of yourself, a version approved by HR and family, with politically more mainstream views; the exception is professional extremists who profit from presenting extreme, non-mainstream views, and I often suspect they push their views further than they actually feel them because that's what brings in listeners and profits. I actively do not trust real-name accounts, and avoid real-name forums, for that reason: if you're making money I don't trust you, and if you're not making money I suspect you'd like to be, that you're just a pawn lurking and hoping for a promotion. Your totally anonymous boards, your chans et al., have been noted before by @DaseIndustriesltd as producing a particular kind of identity, one where you only exist as a representation because there is nothing else to cling to, no persistent identity or username to hang a reputation on, so one can only think in generalities. I've never been able to get into them for that reason: I just don't think in generalities. Call it narcissism, but I don't identify with anything that comes up, and I don't have much interest in being tagged one way or tagging others.

Pseudonymous accounts, Reddit or our little Reddit clone, are the sweet spot in my opinion: it would be a chore for anyone to link this to my professional life, so I can let things swing a little freer, but at this point I'm attached enough to the username that I'm unlikely to just toss bullshit out there*. Sure, on the internet nobody knows you're a dog and one has to take everything with a grain of salt, but I can at least form long-term, coherent views of users and usernames, and lies that are too outrageous will torpedo credibility and leave you a voice in the wilderness. I'm sure some people have rolled their eyes at stories I'll tell, but if I claimed I was benching 400 and fucking models after I finish my PhD work at Harvard, one could just tune me out because it would be obvious I was lying. I'm motivated to tell the truth both by my inner desire to share my real life and by a requirement that I offer something realistic to get audience traction; the truth being the easiest lie to remember, I stick with it when I'm dealing with complex shit on here.

Which brings us back to dickstretching. When I see an account where everything is in line, it feels fake. It could be a bot, it could be a person fronting, or it could be a person who just genuinely has generic beliefs; genuinely real is only one of the three. When I see weird shit, it feels more authentic; everyone is into something strange or incongruous or shameful. Lord knows I am, and themotte has thrown it back at me when someone sees an opening. When I see somebody online who claims to be a strict tradcath with a hot tradwife and 8 tradkids, who attends mass every day and is preparing for the war to come, I think it's all a troll. When I see somebody online who claims that some ideology appeals to him, and also likes this or that anime (I don't know which are obscure or common), and doesn't like burritos, and is a Buffalo Bills fan, it feels real. When I see somebody who genuinely admits to things that aren't flattering, it feels true.

Idk where this all ends up. As authenticity online becomes harder and harder to parse, because of the mix of social pressure, bots, and the monetization of the lowest levels of human discourse by the thirsty blood-funnel of capitalism, weirdness is becoming the only thing that works for me to know someone is real. Let your freak flags fly, and look for other ships flying theirs before you have a parley. From online discussion to online dating, the only way to trust anyone is to know how they stretch their dick.

*Aside: this is why private account histories should be removed as a feature; if I tell two different versions of the same backstory, I should be call-out-able.

GPT-3 can imitate a lonely guy pestering gonewild posters or posting on dickstretching or popping just as well as it can a tradwife or a ConservativeMomsForCruz or a #hustle #grindset money-Twitter poster, though.

Really? My attempts at eliciting that kind of response have produced rather questionable results, to say the least. I entered the following question from /r/bigdickproblems into ChatGPT:

I have a few questions. What erection level should Jelqs be done? I've been doing them completely flaccid for now. And after growing to the size I want. Is there a routine to maintain size but not grow anymore? Because I don't want to keep growing.

This is what GPT had to say:

Jelqing is a technique that involves pulling or milking the penis with your hands in an effort to increase its size. There is no scientific evidence to support the effectiveness of jelqing, and it is not recommended as a method for increasing the size of the penis. In fact, jelqing can be dangerous and can cause permanent damage to the penis, including scarring, loss of sensation, and difficulty achieving and maintaining an erection.

It's important to note that the size of the penis is determined by genetics and cannot be changed through exercises or devices. Attempting to increase the size of the penis through techniques like jelqing is not only ineffective, but it can also be harmful.

If you are interested in maintaining the size of your penis, it's important to focus on maintaining good overall health through a healthy diet, regular exercise, and other healthy lifestyle habits. It's also important to practice safe sex and to use condoms to reduce the risk of sexually transmitted infections. If you are concerned about the size of your penis or have any other concerns about your sexual health, it's best to speak with a healthcare provider. They can provide you with accurate information and guidance on how to maintain good sexual health.

I asked a number of other dickstretching-related questions from Reddit (btw the actual sub is /r/AJelqForYou) and the responses were so nearly identical that they aren't worth reproducing here. Of course, the AI can be customized to do various things so (I'm assuming; I don't know much about this) one could theoretically train the AI on various subs to get it to generate responses that would be more in line with what's typical of the sub. But let's be honest here—that's a lot of work to run a few low-level scams. The most prominent Reddit bot, Philosopher AI, charges by the response, so I imagine it would get pretty expensive for a scammer to set up dozens or hundreds of phony accounts. This would also take time, as an account that popped up last week and already has 5,000 responses looks about as suspicious as one that popped up yesterday and has zero responses that aren't related to whatever scam I'm worried about getting baited into. So creating a convincing profile would take months if not years of regular posting, all of it timed to match the patterns of a typical Reddit user (the Philosopher AI bot was discovered because the unusually high volume of posts made it stick out like a sore thumb). And you'd have to start this process over for every scam you run because as soon as the first guy realizes he's been had it's only a matter of time before the account gets banned from whatever sub it's selling in.

This is one of those things that could theoretically work, but the orchestration would have to be so elaborate that it isn't worth doing on a mass scale. Scammers generally depend on schemes where the pool of potential marks is huge and the individual payouts are large enough to make the individual time spent on each mark worth it. Plus, Reddit isn't normally used as a marketplace, and @FiveHourMarathon's method of determining genuine accounts is niche enough that most scammers probably aren't worried about it, any more than they're worried about people who would find it implausible that a billionaire needs your help to retrieve hundreds of millions in embargoed funds.

Of course, the AI can be customized to do various things so (I'm assuming; I don't know much about this) one could theoretically train the AI on various subs to get it to generate responses that would be more in line with what's typical of the sub. But let's be honest here—that's a lot of work to run a few low-level scams.

/r/SubSimulatorGPT2?

Those simulations don't inspire a great deal of confidence in the ability of AI to credibly simulate Reddit comments. Most of the responses are barely grammatical, let alone coherent.

ChatGPT, after being trained on the raw internet, was taught to be a PR-friendly Q&A assistant that writes in coherent sentences and paragraphs, which is why it responds to out-there questions like that. A language model trained to talk like a degenerate redditor would do so just as easily as ChatGPT imitates an HR employee (as you said). But it's much easier than that - a language model trained on 'just the internet', as most are, does both by default, depending on context, and they had to 'train that out' (and the constant bypasses show that takes some effort).
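
To make "does both by default, depending on context" concrete, here's a minimal sketch (hypothetical, not anyone's actual bot): hand a base, non-instruction-tuned completion model a few in-register comments and let it continue. It assumes the pre-1.0 `openai` Python client and access to a base model like "davinci"; the usernames and prompt text are invented for illustration.

```python
# Minimal sketch: a base model continues in whatever register the context
# establishes. Assumes the pre-1.0 `openai` Python client and the base
# "davinci" model; prompt text and usernames are made up.
import openai

openai.api_key = "sk-..."  # your key here

prompt = (
    "Comments from a niche foreskin-restoration/stretching subreddit:\n\n"
    "u/throwaway_a: Six months of manual stretching twice a day and I'm "
    "finally seeing real progress.\n\n"
    "u/throwaway_b: Nice, are you doing any hanging or just manual?\n\n"
    "u/throwaway_a:"
)

completion = openai.Completion.create(
    model="davinci",   # base model: no RLHF assistant persona to "train out"
    prompt=prompt,
    max_tokens=80,
    temperature=0.9,
    stop=["\n\n"],     # stop at the end of the simulated comment
)

print(completion.choices[0].text.strip())
```

Given an HR-memo context instead, the same base model continues in that register; the "assistant" persona is the thing that got layered on afterward.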

The most prominent Reddit bot, Philosopher AI, charges by the response, so I imagine it would get pretty expensive for a scammer to set up dozens or hundreds of phony accounts

It's expensive to use the latest models or commercial services - but give it a few years: costs drop, leading-edge models aren't leading edge anymore, old models leak and become cheaper or equivalents are open-sourced, and hundreds of competitors train new models on now-old, cheap hardware.

This is one of those things that could theoretically work, but the orchestration would have to be so elaborate that it isn't worth doing on a mass scale

There are large-scale 'orchestrations' of accounts right now that build up post and comment histories over years by reposting images, links, and comments (or substrings of comments) from other users to gain karma and history, and are then sold off. This isn't that much harder. Rdrama already has an rdrama-gpt3 bot that acts like an rdrama troll just by asking an old base GPT-3 to be annoying.

Plus, Reddit isn't normally used as a marketplace

There are a lot of Reddit subs dedicated to being marketplaces! There are even loan subs like https://old.reddit.com/r/borrow/ (previously https://old.reddit.com/r/Loans/, I think) where hundreds of dollars are lent out (at 100% annualized interest rates) based on post history. The incentive is there.

Like I said in another comment, I see no evidence we are there yet. No one has examples of passable Reddit scam bots being created. I don't think it's beyond the horizon of technological possibility, or even beyond current technology! It just isn't happening yet.

Trust is magic, but proxies for trust degrade quickly once they become common knowledge. Goodhart's law and all that. That's why, for thousands of years, maintaining honor required staking your life: the bigger the commitment required, the more valuable the result.

I think the biggest issue, though, is that these scams aren't generally set up to target the most conscientious individuals. I've heard that the reason scam emails are generally written with poor grammar isn't that the scammers can't speak English very well but that it's done intentionally to root out smart people; i.e., the kind of person whose bullshit detector isn't set off by misspellings and poor punctuation isn't the kind of person who's likely to ask too many probing questions that would uncover the fact that the setup obviously makes no sense. I don't know if this is true or not (scambait videos I've seen don't suggest that the scammers are particularly good writers), but it's an interesting idea to consider. One thing that is true, though, is that scammers generally don't try too hard to conceal their scams. I've gotten a number of questionable phone calls over the years, and when I start probing them or wasting their time they usually just hang up, because it's easier to wait for another mark than to tie up the phone lines trying to convince someone who's unlikely to buy in anyway.

I haven’t heard that theory, just that it doesn’t make any difference how good the English is because the best scam marks are also the ones most likely to say to themselves ‘of course he has bad English, he’s the prince of Nigeria, not England’.

Yesterday a scammer DMed me out of the blue on [popular website], and I replied (after getting hundreds of similar DMs, I was curious). He was advertising an options trading site. The writing and English were uniformly bad, like "I got Trading", "am one of workers of company", "You can withdrawal all of your profits."

I've heard "it's bad English and unconvincing to select for dumb people" many times. But after 10 minutes of DMing him a bunch of non-culture-war but offensive rdrama-tier bait, he kept replying with, alternately, mild offense and more promotion for options in broken English. For another 10 minutes I diplomatically asked him to explain the scam, or just his work situation as a spammer, and just got "am NOT" and "You have to stop this and tell me if you are willing to invest". For the last ten minutes, I explained to him the 'bad English to root out smart people' theory and asked why he continued to engage given I don't match the scam-recipient profile, and at the end of that he finally said "Ok bye." But then I asked him "soo can i have a link to the (totally not a scam) options site though hehe" and got "Thank you you believe it nota scam" in return. I then asked what his native language was, chatted with him in Russian using Google Translate for a few minutes, and then gave up (he was still replying). Judging from some longer messages, his words per minute were noticeably slow - around 30 (although according to Google that's the average typing speed in the US). He never ended up sending me the URL.

This isn't really explained by the 'poor grammar to root out sophisticated people' theory, as the scammer didn't stop talking despite clear trolling and awareness of the scam.

My guess is the 'selecting for good marks' theory is overemphasized - it's plausible it's true sometimes, though I'd like to see an actual scammer (I'm sure there are scam forums) claiming they do that; but I'm pretty sure the majority of cases are just genuine bad English, median-or-below intelligence, and poor incentive alignment between the poor call-center worker and whoever benefits directly from the scam.

[/tangent]

Someone running ten thousand Reddit bot accounts will probably be more sophisticated than a phone scammer, and the sophistication necessary to use GPT-n will drop as the years pass. It'll be a while before they can reliably trick most people, but something as simple as 'having a plausible post history in /r/jelqing' is not that hard. Even a non-AI, purely-repost bot can do that - just copy 2-year-old posts and comments to /r/jelqing, delete the ones that get downvoted for not making sense in context, and then, once they're aged, make a post on /r/borrow asking for $50 for groceries because work is stiffing me on my paycheck! (A rough sketch of that repost step is below.)
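
For what it's worth, the repost step really is only a couple of API calls. A minimal sketch, assuming the `praw` library and script-app credentials; it's illustrative only (actually running something like this is against Reddit's rules), the subreddit name is just the one from the comment above, and the limits and pacing are made up.

```python
# Minimal sketch of the "non-AI purely repost bot" idea above (illustrative
# only; running this violates Reddit's rules). Assumes `praw` and script-app
# credentials; subreddit name, limits, and pacing are assumptions.
import time
import praw

reddit = praw.Reddit(
    client_id="...",
    client_secret="...",
    username="...",
    password="...",
    user_agent="history-builder-sketch",
)

TWO_YEARS = 2 * 365 * 24 * 3600
sub = reddit.subreddit("jelqing")

# 1. Repost old, once-popular self-posts to build up a plausible history.
for old in sub.top(time_filter="all", limit=50):
    if old.is_self and time.time() - old.created_utc > TWO_YEARS:
        sub.submit(title=old.title, selftext=old.selftext)
        time.sleep(3600)  # pace it like a human, not a firehose

# 2. Prune anything that got downvoted for not making sense in context.
for comment in reddit.user.me().comments.new(limit=100):
    if comment.score < 1:
        comment.delete()
```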