
Culture War Roundup for the week of December 26, 2022

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


The Dickstretcher Theory of Online Credibility: A Turing Test for the Social Media Age

One bullet point on my little Reddit-Ghislaine-Epstein conspiracy theory post that drew a lot of laughter was my story of buying an expensive vintage watch on Reddit from a user I gave a lot of credibility because he posted in strange and obscure subreddits on the same account, including a subreddit for hobbyists who stretch their penises to restore a circumcised foreskin or to try to add length. Obviously dickstretching does not coincide with trustworthiness or reliability, nor does it particularly coincide with expertise in watches. But it's simply so strange a thing that it passes the Turing test.

The big pile of comments on a random, obscure hobby subreddit is the text equivalent of the reCAPTCHA tests that just require a click. The process is simple; it wouldn't be hard for a scammer to comment on weird subreddits, or to program a bot to do it, but A) to my knowledge no one tries it, B) it would take a fair amount of effort and time for an account that would later get banned, and C) I do think there is something ineffable about the drunkard's walk of a real human commenting on the weird shit that real humans like. I've been thinking about how this fits into a broader theory of online credibility, and how to assign it.

I’ve talked before about James Clavell’s fake-Japanese three-hearts model. Humans are vast; we contain multitudes. We have different layers of opinions: those we share with everyone, those we share with some, and those we share with no one at all. These layers are as different from one another as any two identities can be.

Balaji, in his interview on the Lex Fridman podcast, talked about how different forms of identity interact online. Your real-name account is often presenting a fake version of yourself: a version approved by HR and family, with politically more mainstream views. The exception is professional extremists who profit from presenting extreme non-mainstream views, and I often suspect they push those views further than they actually feel them, because that's what brings in listeners and profits. I actively do not trust real-name accounts, and avoid real-name forums, for that reason: if you're making money, I don't trust you; if you're not making money, I suspect you'd like to, and that you're just lurking on that pawn hoping for a promotion.

The totally anonymous boards, the chans et al., have been noted before by @DaseIndustriesltd as producing a particular kind of identity, one where you only exist as a representation because there is nothing else to cling to: no persistent identity or username to place a reputation on, so one can only think in generalities. I've never been able to get into them for that reason. I just don't think in generalities; call it narcissism, but I don't identify with anything that comes up, and I don't have much interest in being tagged one way or tagging others.

Pseudonymous accounts, Reddit or our little Reddit clone, are the sweet spot in my opinion: it would be a chore for anyone to link this to my professional life, so I can let things swing a little freer, but at this point I’m attached enough to the username that I’m unlikely to just toss bullshit out there*. Sure, on the internet nobody knows you’re a dog and one has to take everything with a grain of salt, but I can at least form long-term, coherent views of users and usernames, and lies that are too outrageous will torpedo credibility and leave you a voice in the wilderness. I’m sure some people have rolled their eyes at stories I’ll tell, but if I claimed I was benching 400 and fucking models after I finish my PhD work at Harvard one could just block me out because it would be obvious I was lying. I’m motivated to tell the truth both by my inner desire to share my real life and by the requirement that I offer something realistic to get audience traction; the truth being the easiest lie to remember, I stick with it when I’m dealing with complex shit on here.

Which brings us back to dickstretching. When I see an account where everything is in line, it feels fake. It could be a bot, it could be a person fronting, it could be a person who just genuinely has generic beliefs; but "real" is only one of those three. When I see weird shit, it feels more authentic; everyone is into something strange or incongruous or shameful. Lord knows I am, and themotte has thrown it back at me when someone sees an opening. When I see somebody online who claims to be a strict tradcath with a hot tradwife and 8 tradkids, who attends mass every day and is preparing for the war to come, I think it’s all a troll. When I see somebody online who claims that some ideology appeals to him, and also likes this or that anime (I don’t know which are obscure or common), and doesn’t like burritos, and is a Buffalo Bills fan, it feels real. When I see somebody who genuinely admits to things that aren’t flattering, it feels true.

Idk where this all ends up. As authenticity online becomes harder and harder to parse, thanks to the mix of social pressure, bots, and the monetization of the lowest levels of human discourse by the thirsty blood-funnel of capitalism, weirdness is becoming the only thing that tells me someone is real. Let your freak flags fly, and look for other ships flying theirs before you parley. From online discussion to online dating, the only way to trust anyone is to know how they stretch their dick.

*As an aside, this is why private account histories should be removed as a feature: if I tell two different versions of the same backstory, I should be call-out-able.

Alas, I fear even this could be easy to spoof at some point.

When I see weird shit, it feels more authentic; everyone is into something strange or incongruous or shameful

Everybody has weird shit, but not everybody puts it online, and not everybody uses the same login/identity for all their weird shit.

It is a heuristic, and nothing you said suggests that it isn't a good heuristic. For one set of users he says "they might be real, but they might be a bot/spammer", and for a second set he will say "I have reason to believe that these are real people".

So long as he can filter out bots and such from the second set, it's a great heuristic. It's fine if lots of real people get thrown out, as long as the bots get thrown out with them, and you are left with people you have reason to believe are real.

This theory only works if the account has a large number of comments / is very active. Pseudonymous account activity is power-law distributed: a small number of accounts comment a lot; most barely comment at all.

I know 3 "normies" from meatspace who use reddit. Their reddit accounts are literally all generic posts in hobby subreddits such as /r/cars, /r/programming, and /r/memes. Those accounts would fail your test.

And moreover, I use reddit for buying/selling things too. I have a very "clean" account for that where I rarely post anything at all, given that I don't want people who will see me in real life to know about my dickstretching habit. I think a large contingent of people will similarly use a "clean" account, devoid of any dickstretching, for business reasons.

tl;dr: I think your model will produce mostly false negatives.

A preponderance of false negatives isn't a flaw if the harm of a false negative is negligible (at worst opportunity cost) while the harm of a false positive is high (thousands of dollars, who knows what else).

The two use cases I pointed to where I've noted its usefulness on reddit were r4r and buying a luxury watch. If I miss out on a Rolex because I didn't trust the seller, the harm is that I don't own a Rolex for whatever period it takes me to find a trustworthy seller, which might be no time at all if there are Rolexes available from sellers who pass. If I buy a Rolex and get scammed, I'm out two grand or more. If I miss out on an actual hot milf who wants to meet in my area because I don't trust the account's vibe, I've missed out on absolutely nothing as long as I have more applicants than available calendar dates, and on nothing but the time it takes to find a trustworthy account otherwise. If I get scammed by an online bot account, well as I type this I realize I don't actually know what happens then but I'm pretty sure it's not gonna be good for me.

So it might be limited to similar circumstances. It would not be a practical way to buy something I actually needed in short order. Or a practical way to assess the credibility of information I needed to look up regularly.
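To put rough numbers on that asymmetry, here's a minimal sketch in Python. The scam probability and the dollar figures are entirely hypothetical illustrations, not anything claimed in the comments above:

```python
# Hypothetical expected-loss comparison: a false negative (passing on a real
# seller) costs little, a false positive (trusting a scammer) costs a lot.

def expected_loss(p_scam: float, loss_if_scammed: float, loss_if_pass: float):
    """Return (expected loss if I buy, expected loss if I walk away)."""
    buy = p_scam * loss_if_scammed        # lose the purchase price if it's a scam
    walk = (1 - p_scam) * loss_if_pass    # lose only the opportunity if it wasn't
    return buy, walk

# Made-up numbers: 20% chance of a scam, $2,000 watch, $100 worth of hassle/delay.
buy, walk = expected_loss(p_scam=0.2, loss_if_scammed=2000, loss_if_pass=100)
print(f"buy: ${buy:.0f} expected loss, walk away: ${walk:.0f} expected loss")
# buy: $400 expected loss, walk away: $80 expected loss
```

Under numbers anything like these, a heuristic that throws out lots of real sellers along with the bots is still cheap to follow.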

well as I type this I realize I don't actually know what happens then but I'm pretty sure it's not gonna be good for me.

Contact your credit card issuer immediately and ask for a chargeback, or try to file a fraud complaint. Sometimes the card company will actually allow a chargeback if they have sufficient reason to believe it was a scam (and you didn't sign a contract or such).

If that fails, you will probably have to take the L unless you want to hire a lawyer (or some goons).

I was thinking less in the charge-on-the-credit-card range than in the "How would you feel about your mother/boss/priest seeing these conversations?" direction. Or maybe the "show up and get beat up and robbed" direction. But I've never gotten anywhere near either, afaik, nor do I really know of stories of people who have. But it's always the worry, right?

From online discussion to online dating, the only way to trust anyone is to know how they stretch their dick.

Not always. There are people, some very dedicated, who invest lots of time and work into creating 100% believable alternate online personalities.

https://en.wikipedia.org/wiki/Joshua_Ryne_Goldberg#Other_personas

Online activities

Goldberg made online posts under many pseudonyms[8] and spent 14 to 20 hours on the internet per day.[19] After learning of his internet activities, the FBI attempted to place him under physical surveillance, but this was generally ineffective because for several weeks he did not leave his parents' house or open the front door.[20]

(this example is extreme, but you can be sure there are many others like him)

spent 14 to 20 hours on the internet per day.[19]

because for several weeks he did not leave his parents' house or open the front door.[20]

the fuck

the fuck

No surprise: there is nothing big bureaucratic organizations detest more than amateurs intruding into their domain.

This guy alone was doing the work of a dozen FBI agents, only far better and completely for free. What would the FBI do with him when they finally caught him? Obviously, prison.

For a similar case, see this guy.

https://en.wikipedia.org/wiki/Darius_McCollum

It is unclear why you think Goldberg, who pleaded guilty at a very early stage of his prosecution to an attempted bombing, was somehow targeted for "intruding into the domain" of the FBI, or why you think he was engaged in law enforcement activities.

I think maybe the implication was "Goldberg was arrested because he was too good at ginning up extremist fervour, when the FBI prefers to do the ginning up so they can bust more extremist groups." Basically the whole "federal agent"/"fedposting"/"glowie" memeplex.

Trust your government. Without government, who would finance and organize ... check... a Neo-Nazi Satanist terrorist group?

Look, I certainly don't have to like the various three-letter agencies of the US Government, and I often think their existence causes at least as much bad as good, but...to be frank, if I were in the FBI, I'd think infiltrating a group that claims to want to bring down the US Government, and bringing it down from the inside, would indeed be a good strategy, especially if it's repeatable. To whatever extent the FBI et al. are guilty of egging along bad guys so that they can try to catch them in the act...well, that's kind of just standard police nastiness, and you'll probably notice how the past 30+ years of American history have involved people grappling with the ways the police can abuse their power.

And of course, you would think that if almost every extremist group is really just an FBI honeypot, then this would provide evolutionary pressure, creating extremists who are smarter, or at least ones who never take things so far as to inflict damage. But this doesn't happen, because extremists can be kind of dumb; extremism goes hand-in-hand with epistemic closure. And if the strategy works, why would the FBI stop using it?

If money is involved, a surefire rule is that if something seems too good to be true, it probably is. Detecting fedposting or astroturfing is probably harder.

I’m sure some people have rolled their eyes at stories I’ll tell, but if I claimed I was benching 400 and fucking models after I finish my PhD work at Harvard one could just block me out because it would be obvious I was lying.

Even if someone makes a very improbable claim, if there is no way that they are directly profiting/benefiting from it, then it's more likely to be true than if there is some incentive to lie. I think more often than not people are telling the truth but maybe omit or embellish some details. Maybe that 400-pound bench press is real but the part about steroids is omitted.

I like to default on the side of believing people. I think it's a more helpful mindset to have. I remember many years ago on a forum people were discussing some way to make money, and most people dismissed it as fake. I set out to replicate it, and indeed it did work.

if there is no way that they are directly profiting/benefiting from it, then it's more likely to be true than if there is some incentive to lie.

The incentive to lie is often just to gain credibility. /r/climbharder has more users than there are people in the United States who climb as hard as the typical user there claims to climb. But I can see the temptation: you know the advice you want to give is good, but you also know no one will take you seriously if you say you only climb V5, so you call yourself a V8 climber just to get your point across.

Profit/Benefit probably isn't a good metric, because more people will be willing to lie for the minor benefit of credibility on reddit than will be willing to lie for money. The bigger benefit carries a bigger perceived ethical cost to lying; lying to steal money is much worse than lying to get laid or lying to win an argument with a stranger.

But there are a lot of ways to soften a 400lb bench press (or any other achievement, I'm probably guilty of a good number of them myself!)

-- I can bench 400 (but I'm otherwise a fat unathletic slob living in my parents basement, bench is just my one achievement so it is what I bring up)

-- I can bench 400 (once, with a little help from my spotter, maybe, and I've never done it again, but dammit it happened!)

-- I can bench 400 (with a great deal of chemical aid)

I like to default on the side of believing people.

A default is something you use when you have no information. Whether someone's claim is improbable is not only information, but really important information.

I remember many years ago on a forum people were discussing some way to make money, and most people dismissed it as fake. I set out to replicate it, and indeed it did work.

The term "fake" is vague here. Were there claims that the person didn't do what he said at all? That he did but left out information? That he did but achieved his result only by luck?

Also, what was it? Because if you don't tell us what it is, it's hard to figure out whether you were just lucky, whether you left something out, etc.

Something to do with social media. People didn't think he was making as much as he claimed. It was not luck related, but something I felt I could reproduce. Anyone can pretend or claim to be making money. I decided to see if it was possible using the information provided.

Was this on the old subreddit? I can remember a few. Were people saying that markets couldn't possibly be so inefficient?

This reminds me of the guy who convinced me to take coronavirus seriously back in early 2020. I found an account talking about the future risks in an eloquent tone, and when I clicked his profile I saw that he posted nudes in a gay sub. It just so happened that I was able to cross reference the pic to a list of graduate students in an important immunology-centered PhD program, but even before that I trusted this guy significantly more, having seen his naked body exhibited to the internet at large. Sorry to use another dick example.

Because I’ve spent so long online, I can tell with decent accuracy when a poster is legit and also when they have important information to convey. For instance, Chris Corner from the band Sneaker Pimps wrote a post on 4chan saying he had left to volunteer in Ukraine. Click that link at your own peril. Although a normal person, whose intuition-AI engine was not trained on reading shitposts online, would read this and say it’s bullshit, I’m 90% sure it’s real. I have zero evidence that it is real except for the way it was written, and I have no real desire to phone up a Sneaker Pimps member to confirm.

I have developed some, well, essentially bigoted heuristics over time. If I am split 50/50 on some debate, and one side has most of the autistic white/Indian twitter accounts (who usually don’t post in colloquial persuasive language but in dense logical assertions), and the other side has more verified women scientists, I will always believe the former. For instance, right now I think the recent study showing increased IgG4 antibodies in fully vaccinated individuals is a serious problem. As a layman, it’s impossible for me to fully understand the various competing claims (all-cause mortality rate versus disease severity rate versus IgG4 proliferation in measles showing it’s not a problem, etc.). But the autistic Indian/white posters are telling me it’s a problem, and the verified women and Ukraine-flagged accounts are telling me it’s not a problem. My own bias is to always believe the former in matters of uncertainty, as they have been correct about so many significant things IMO.

I actively do not trust real-name accounts, and avoid real-name forums, for that reason: if you're making money, I don't trust you; if you're not making money, I suspect you'd like to, and that you're just lurking on that pawn hoping for a promotion

I would say there's an exception for old people who have been on the internet for a very long time. Back in the '80s and '90s, it used to be a LOT more common for people to use their real names, just because there was no great perceived need for pseudonymity.

The internet went through various phases. Very early on, in the '80s and the first few years of the '90s, it was normal to use your real name, but that was mostly because the internet was pretty much limited to academic settings. This was never the case in the BBS world, which back then was as popular as (if not more popular than) the internet. Eventually, as the internet became more widespread, the real-name norm died out; it was basically non-existent by '97.

Pseudonymity remained the norm until 2007, when Facebook arrived, and 2014, when Google+ tried to encourage everyone to use their real names.

As late as 2000, on the now-vanished Salon Table Talk fora, plenty of people used their real names. One woman gave so many details of her life in a small Upper Midwest city that, had I been the malicious type, I could have tracked her down IRL without any trouble.

GPT-3 can imitate a lonely guy pestering gonewild posters or posting on dickstretching or popping just as well as it can a tradwife or a ConservativeMomsForCruz or a #hustle#grindset money twitter poster though.

Really? My attempts at eliciting such a response have produced questionable results, to say the least. I entered the following question from /r/bigdickproblems into ChatGPT:

I have a few questions. What erection level should Jelqs be done? I've been doing them completely flaccid for now. And after growing to the size I want. Is there a routine to maintain size but not grow anymore? Because I don't want to keep growing.

This is what GPT had to say:

Jelqing is a technique that involves pulling or milking the penis with your hands in an effort to increase its size. There is no scientific evidence to support the effectiveness of jelqing, and it is not recommended as a method for increasing the size of the penis. In fact, jelqing can be dangerous and can cause permanent damage to the penis, including scarring, loss of sensation, and difficulty achieving and maintaining an erection.

It's important to note that the size of the penis is determined by genetics and cannot be changed through exercises or devices. Attempting to increase the size of the penis through techniques like jelqing is not only ineffective, but it can also be harmful.

If you are interested in maintaining the size of your penis, it's important to focus on maintaining good overall health through a healthy diet, regular exercise, and other healthy lifestyle habits. It's also important to practice safe sex and to use condoms to reduce the risk of sexually transmitted infections. If you are concerned about the size of your penis or have any other concerns about your sexual health, it's best to speak with a healthcare provider. They can provide you with accurate information and guidance on how to maintain good sexual health.

I asked a number of other dickstretching-related questions from Reddit (btw the actual sub is /r/AJelqForYou) and the responses were so nearly identical that they aren't worth reproducing here. Of course, the AI can be customized to do various things so (I'm assuming; I don't know much about this) one could theoretically train the AI on various subs to get it to generate responses that would be more in line with what's typical of the sub. But let's be honest here—that's a lot of work to run a few low-level scams. The most prominent Reddit bot, Philosopher AI, charges by the response, so I imagine it would get pretty expensive for a scammer to set up dozens or hundreds of phony accounts. This would also take time, as an account that popped up last week and already has 5,000 responses looks about as suspicious as one that popped up yesterday and has zero responses that aren't related to whatever scam I'm worried about getting baited into. So creating a convincing profile would take months if not years of regular posting, all of it timed to match the patterns of a typical Reddit user (the Philosopher AI bot was discovered because the unusually high volume of posts made it stick out like a sore thumb). And you'd have to start this process over for every scam you run because as soon as the first guy realizes he's been had it's only a matter of time before the account gets banned from whatever sub it's selling in.

This is one of those things that could theoretically work, but the orchestration would have to be so elaborate that it isn't worth doing on a mass scale. Scammers generally depend on schemes where the pool of potential marks is huge and the individual payouts are large enough to make the individual time spent on each mark worth it. Plus, Reddit isn't normally used as a marketplace, and @FiveHourMarathon's method of determining genuine accounts is niche enough that most scammers probably aren't worried about it, any more than they're worried about people who would find it implausible that a billionaire needs your help to retrieve hundreds of millions in embargoed funds.

Of course, the AI can be customized to do various things so (I'm assuming; I don't know much about this) one could theoretically train the AI on various subs to get it to generate responses that would be more in line with what's typical of the sub. But let's be honest here—that's a lot of work to run a few low-level scams.

/r/SubSimulatorGPT2?

Those simulations don't inspire a great deal of confidence in the ability of AI to credibly simulate Reddit comments. Most of the responses are barely grammatical, let alone coherent.

ChatGPT, after being trained on the raw internet, was taught to be a PR-friendly Q&A assistant that writes in coherent sentences and paragraphs, so that's how it responds to out-there questions like that. A language model trained to talk like a degenerate redditor would do so just as easily as ChatGPT imitates an HR employee (as you said). But it's even easier than that: a language model trained on 'just the internet', as most are, does both by default, depending on context, and they had to 'train that out' (and the constant bypasses show that takes some effort).

The most prominent Reddit bot, Philosopher AI, charges by the response, so I imagine it would get pretty expensive for a scammer to set up dozens or hundreds of phony accounts

It's expensive to use the latest models, or commercial services - but give it a few years and costs drop, leading edge models aren't leading edge anymore, old models leak and become cheaper, or equivalents are open sourced, and hundreds of competitors train new models on now-old and cheap hardware.

This is one of those things that could theoretically work, but the orchestration would have to be so elaborate that it isn't worth doing on a mass scale

There are large-scale 'orchestrations' of accounts right now that build up post and comment histories over years by reposting images, links, and comments (or substrings of comments) from other users to gain karma and history, and are then sold off. This isn't that much harder. Rdrama already has an rdrama-gpt3 bot that acts like an rdrama troll just by asking an old base GPT-3 to be annoying.

Plus, Reddit isn't normally used as a marketplace

There are a lot of reddit subs dedicated to being marketplaces! There are even loan subs like https://old.reddit.com/r/borrow/ (previously https://old.reddit.com/r/Loans/, I think) where hundreds of dollars are lent out (at 100% annualized interest rates) based on post history. The incentive is there.

Like I said in another comment, I see no evidence we are there yet. No one has examples of passable Reddit scam bots being created. I don't think it's beyond the horizon of technological possibility, or even current technology! It just isn't happening yet.

Trust is magic, but proxies for trust degrade quickly once they become common knowledge. Goodhart's law and all that. That's why, for thousands of years, maintaining honor required staking your life: the bigger the commitment required, the more valuable the result.

I think the biggest issue, though, is that these scams aren't generally set up to target the most conscientious individuals. I've heard that the reason scam emails are generally written with poor grammar isn't that the scammers can't speak English very well but that it's done intentionally to root out smart people; i.e., the kind of person whose bullshit detector isn't set off by misspellings and poor punctuation isn't the kind of person who is likely to ask too many probing questions that would uncover the fact that the setup obviously makes no sense. I don't know if this is true or not (scambait videos I've seen don't suggest that the scammers are particularly good writers), but it's an interesting idea to consider. One thing that is true, though, is that scammers generally don't try too hard to conceal their scams. I've gotten a number of questionable phone calls over the years, and when I start probing them or wasting their time they usually just hang up, because it's easier to just wait for another mark than to tie up the phone lines trying to convince someone who's unlikely to buy in anyway.

I haven’t heard that theory, just that it doesn’t make any difference how good the English is because the best scam marks are also the ones most likely to say to themselves ‘of course he has bad English, he’s the prince of Nigeria, not England’.

Yesterday a scammer DMed me out of the blue on [popular website], and I replied (after getting hundreds of similar DMs, I was curious). He was advertising an options trading site. The writing and English were uniformly bad, like "I got Trading", "am one of workers of company", "You can withdrawal all of your profits."

I've heard "it's bad English and unconvincing to select for dumb people" many times. But after 10 minutes of DMing him a bunch of non-culture-war but offensive rdrama-tier bait, he kept replying with, alternately, mild offense and more promotion for options in broken English. For another 10 minutes I diplomatically asked him to explain the scam, or just his work situation as a spammer, and just got "am NOT" and "You have to stop this and tell me if you are willing to invest". For the last ten minutes, I explained to him the 'bad English to root out smart people' theory and asked why he continued to engage given that I don't match the scam recipient profile, and at the end of that he finally said "Ok bye." But then I asked him "soo can i have a link to the (totally not a scam) options site though hehe" and got "Thank you you believe it nota scam" in return. I then asked what his native language was, chatted with him in Russian using Google Translate for a few minutes, and then gave up (he was still replying). From some longer messages, his words per minute were noticeably slow - around 30 (although according to Google that's the average typing speed in the US). He never ended up sending me the URL.

This isn't really explained by the 'poor grammar to root out sophisticated people' theory, as the scammer didn't stop talking despite clear trolling and awareness of the scam.

My guess is that the 'selecting for good marks' theory is overemphasized - it's plausible it's true sometimes, although I'd like to see an actual scammer (I'm sure there are scam forums) claiming they do that - but I'm pretty sure a majority of cases are just genuine bad English, median-or-below intelligence, and poor incentive alignment between the poor call-center worker and whoever benefits directly from the scam.

[/tangent]

Someone running ten thousand reddit bot accounts will probably be more sophisticated than a phone scammer, and the sophistication necessary to use gpt-n will drop as years pass. It'll be a while before they can reliably trick most people, but something as simple as 'having a plausible post history in /r/jelqing' is not that hard. Even a non-AI, pure repost bot can do that: just copy two-year-old posts and comments to /r/jelqing, delete the ones that get downvoted for not making sense in context, and then, once they've aged, make a post on /r/borrow asking for $50 for groceries because work is stiffing me on my paycheck!

I wonder if it will remain uneconomic to create these identities of verisimilitude as AI improves. There's nothing fundamentally unfeasible about AIs that can generate a character with a series of quirks, but would they ever be able to generate enough revenue to be broadly deployed?

I sporadically make online comments with indications of personal traits that are entirely fictitious. That's mostly in the hopes of throwing off an oddball who would want to dox me, though (I recognize that this is entirely a layer of inconvenience and not real security).

I sporadically make online comments with indications of personal traits that are entirely fictitious. That's mostly in the hopes of throwing off an oddball who would want to dox me, though (I recognize that this is entirely a layer of inconvenience and not real security).

I did that before, but it got tiring, and I realized that in order to talk about the constellation of things I like, no matter how much I say I live in Ohio or whatever, if someone really knew me it's like "Oh, this guy rock climbs but not that well, read Tolstoy recently, drives a Chevy Avalanche" - how many of those exist?

I wonder if it will remain uneconomic to create these identities of verisimilitude as AI improves. There's nothing fundamentally unfeasible about AIs that can generate a character with a series of quirks, but would they ever be able to generate enough revenue to be broadly deployed?

Yeah, it is almost certainly the kind of thing that is only one step ahead of the pack rather than a permanent truth. It is true given the state of the world as it exists today, but may not be next week or next year, and almost certainly not a decade hence. Once AI hits, history will still hold you for a while - you can trust a two-year-old account if the AI came out last week - but at some point it will be impossible.

So I guess the answer to both these questions is that it will only work inasmuch as no one cares enough to fuck with me that hard. It becomes like a lock on your door: it keeps out casual thieves, but if someone really wanted to get into most modern suburban-subdivision new builds, they could rip the siding off and hammer through the drywall in fifteen minutes.

I did that before, but it got tiring, and I realized that in order to talk about the constellation of things I like, no matter how much I say...

You can just do these on different accounts and not connect them behaviorally, if anonymity is worth it. I use container tabs for that.

What I'm saying is that getting deep enough to reliably obscure my identity isn't worth it.

Though I may not be as vulnerable to cancellation as others, so ymmv.

That gets hard to keep straight, though. And stylometry pierces the veil regardless.
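For anyone curious what stylometry looks like at its crudest, here's a toy sketch in Python: character-trigram cosine similarity between two writing samples. Real stylometric tools use far richer features than this, and the sample strings below are just placeholders, not anyone's actual comments:

```python
# Toy stylometry: compare two writing samples by character-trigram frequency.
# High similarity across two "separate" accounts is the kind of signal that
# links them; real stylometric attacks use many more features than this.
from collections import Counter
from math import sqrt

def trigram_profile(text: str) -> Counter:
    text = " ".join(text.lower().split())  # normalize whitespace and case
    return Counter(text[i:i + 3] for i in range(len(text) - 2))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Placeholder samples; in practice you'd feed in two accounts' comment histories.
sample_a = "I did that before, but it got tiring, and honestly it wasn't worth it."
sample_b = "I tried that for a while, but honestly it got tiring and wasn't worth it."
print(round(cosine(trigram_profile(sample_a), trigram_profile(sample_b)), 3))
```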

I'm hoping it won't be long before we have a browser or browser extensions that let you store multiple AI-personality/login combinations for the same websites and then "translate" whatever you type into the various AI personalities, after which you can pick and choose which ones to log in with to actually make the posts. So you wouldn't need to keep track of these various fake accounts and personalities yourself.

A world of lies and nothing but lies. I get that anonymity can be valuable, but when we finally reach the stage at which it's just humans interacting with seven layers of proxy AIs and maybe another human at the other end or more likely not, we may as well shut down the human-accessible parts of the internet and go back to our caves to contemplate our navels for a short while before obsolescence kills us all.

Nah, then we give up and go back to shitposting for the hell of it and stop pretending the internet is real life. Your basket weaving club can't be fake AIs if they're right in front of you. Unless we invent Replicants in the meantime or something.