
Small-Scale Question Sunday for August 6, 2023

Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?

This is your opportunity to ask questions. No question too simple or too silly.

Culture war topics are accepted, and proposals for a better intro post are appreciated.


Does anyone have a good guide on OPSEC for running a pseudonymous online profile? And what do you think about Anatoly Karlin’s contention that AI will make the whole exercise futile?

We're all going to get doxxed in the next 5 years, sure. It's going to be completely trivial - as Karlin says - to fingerprint writing style and have AI seamlessly connect all of your online writing. The only people who won't be so easily discovered are those who have zero lengthy writing that is scrapeable (i.e. no significant writing under their real name on LinkedIn or Facebook, no documentation under their name, no research papers or dissertations, no articles of any kind under their byline, no PDF of internal feedback or commentary mistakenly uploaded to the corporate website, no high school essay that won third place in a public competition, etc.). That's before we consider AI fingerprinting methods that automate a lot of current doxxing methods, e.g. trawling countless forum pages for obscure details, username mentions, matching leaked email/password lists, location data, and so on.

My guess is that if you're the relatively high verbal IQ kind of person who writes longform political content online, you probably have at least one of the above, or some other writing under your real name that has been or will be scraped at some point.

I've always wondered how much it would matter for an average, normal person who's not working in some sort of opinion-shaping job. I have a Tumblr, and I've left posts on Hubski, Saidit, and Reddit. But given that my job isn't high-powered or focused on shaping opinions, I can't imagine any practical value in outing me.

I think most average people don't have radical political views, and if they said something 10 years ago, employers won't care unless it's a high-profile or public-facing role or their enemies specifically come after them. "Normies" don't have 2 million words written over 15 years on fringe conservative political discussion boards like some of us here, though.

In my case I'm skeptical my current employer would care, and if they did, there are plenty of small (e.g. Israeli, Australian, or Arab) shops I could work for, full of people who don't give a shit about "political correctness" - but many people aren't so lucky.

But if you are using these websites to share your views, then you are engaged in shaping the opinions of others. Maybe you’re not being paid for it, but you’re still doing it—and if your opinions are racist or hateful, then you are thus contributing to a more inequitable society. Beyond this, even if you weren’t participating in public discourse, simply harboring toxic and harmful views cannot help but leak into your everyday interactions with others. That’s how implicit bias works.

This is why your average Joe ought to understand that he is not safe to spew toxicity and bigotry simply because he doesn't have a five-figure-follower Twitter account. Hence the cancelling of the OK-sign truck driver, or that of Justine Sacco. I predict that once AI gets powerful enough to scan petabytes of Amazon Alexa data or conversations surreptitiously recorded by TikTok for bigotry, it is precisely "average normal people" who will face a wave of cancellations. Once the current barriers of inconvenience that prevent general members of the public from being cancelled en masse crumble, all that pent-up energy will be released.

ETA: Actually, a potential counterargument against this vision of the future is that we don't currently see people getting cancelled en masse based on voting records. Of course, counters to that counterargument include that many people are registered as unaffiliated, that being a registered Republican is still within the Overton window, etc.

I'm not sure how well this would work, at least without considerable cunning on the part of the cancellers. Cancellation (political persecution, let's be honest) relies on the vast majority of people believing they'll be okay if they just stay quiet. With the invention and deployment of a sufficiently powerful heresy detector, this no longer holds true. You can still make it work by going in waves, either randomly or increasing in severity, but do cancellation mobs have the coordination and self-control to make that viable?

> Cancellation (political persecution, let's be honest) relies on the vast majority of people believing they'll be okay if they just stay quiet. With the invention and deployment of a sufficiently powerful heresy detector, this no longer holds true.

I might just be missing something obvious here, but I'm having trouble seeing why that would be the case. Even in a world with an anti-heresy detector in every smartphone, as long as you don't do anything too egregious (say, making racial jokes with your buddies, or talking about how you think feminism is harmful), you have nothing to fear. That would especially be the case if clear answers developed about what gets you cancelled, in contrast to today's situation, where cancellation thrives on ambiguous boundaries.

It seems like there should be ways to work around this, such as having an LLM rephrase your text. That isn't great for long-form content, but it's good enough for a Twitter account, where specific ideas matter more than style.
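
Something like the following would do it for short posts. This is a minimal sketch using the OpenAI Python client; the model name and prompt are my assumptions, and running a local model instead would avoid handing your drafts to yet another party that can log them:

```python
# Toy sketch: launder stylometric fingerprints by round-tripping text
# through an LLM. Model name and prompt are assumptions, not a
# recommendation; any capable paraphrasing model would do.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def launder_style(text: str) -> str:
    """Rewrite the text in a flat, generic register, preserving meaning."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; swap in whatever you trust
        messages=[
            {"role": "system",
             "content": "Rewrite the user's text so the meaning is preserved "
                        "but the wording, sentence structure, and punctuation "
                        "are all different. Use plain, neutral prose."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

print(launder_style("Draft tweet goes here."))
```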

I guess you could do it from now on, but as I said, I think for most of us that ship sailed a long time ago.

The only long-form corpus of text under my real name would be my research papers. I guess I'm glad they all got heavily rewritten by my professor.

On the other hand, I am skeptical purely because of how statistics work. Like, seriously, I can't find a single paper on this shit that isn't amateur hour.

It would not be so trivial to produce a low-error model that yields high-confidence matches unless you are so online, with both your pseudonymous account and your real account, that the false-positive rate approaches 0 and the true-positive rate approaches 1 purely by the law of large numbers.

Sometimes you really do run into the limits of information theory.

Yeah, there's a lot of text on the internet. On a pretty cursory Bayesian analysis, even with 99.9% accuracy you're looking at a thousand false positives if you're combing through a million posts. Without something else to narrow it down, it seems reasonable that identification won't be possible from writing patterns alone.
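
The arithmetic, for anyone who wants to poke at it (all numbers illustrative):

```python
# Back-of-the-envelope version of the base-rate argument above.
candidates = 1_000_000  # posts/accounts being combed through
fpr = 0.001             # "99.9% accuracy" -> 0.1% false-positive rate
tpr = 1.0               # generously assume the model never misses the real author

false_positives = fpr * (candidates - 1)   # ~1,000 innocent accounts flagged
posterior = tpr / (tpr + false_positives)  # P(flagged account is really the target)

print(f"expected false positives: {false_positives:,.0f}")         # 1,000
print(f"chance a flagged account is the target: {posterior:.2%}")  # ~0.10%
```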

Good point, and that's on top of the difficulties of modeling such a thing in the first place. Unless said model has obscenely (I'm talking insanely, batshit-crazy) high accuracy, there is always going to be plausible deniability behind false positives.

You will probably Light Yagami yourself with information you gave away about your personal life long before they can fingerprint your text.

> You will probably Light Yagami yourself with information you gave away about your personal life long before they can fingerprint your text.

Sure, but the point is that these methods overlap: you can use a powerful LLM to parse high-likelihood text samples for shared details (or even things like shared interests, obscure facts, or specific jargon), narrowing down your list of a thousand matches. Plus, the passwords/emails thing is really important. Most people reuse them at least sometimes, and there are tons of leaked lists online; with those you can chain together pseudonymous identities automatically (right now this is still extremely labor-intensive, so it only happens in high-profile doxxings where suspicions already exist).
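
The chaining step itself is mechanically trivial once the lists are in hand; a toy illustration with fabricated data, just to show the join (real breach dumps are obviously messier):

```python
# Toy illustration of chaining identities via a reused email address.
# All data here is fabricated.
forum_accounts = {"throwaway_gamer": "shared.address@example.com"}  # handle -> email
other_leak = {"shared.address@example.com": "jsmith1984"}           # email -> other handle

for pseudonym, email in forum_accounts.items():
    if email in other_leak:
        print(f"{pseudonym} <-> {other_leak[email]} (shared email: {email})")
```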

And I think writing styles are more distinctive than you think: specific repeated spelling mistakes, specific repeated phrases, uncommon analogies or metaphors, weird punctuation quirks. And the size of the dataset for a regular user here (many hundreds of thousands of words, in quite a few cases) is likely enough for a model tuned on it to get really good at identifying that user's unique writing patterns.
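
And the machinery for that is off the shelf. A minimal stylometry sketch with sklearn, where character n-grams pick up exactly those sub-word quirks (the training corpora and labels here are placeholders):

```python
# Minimal authorship-attribution sketch. Character n-grams capture
# spelling mistakes, punctuation habits, and pet phrases without any
# hand-built features. Texts and labels are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

known_texts = ["...author A's collected posts...",
               "...author B's collected posts..."]
labels = ["author_a", "author_b"]

model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)),
    LogisticRegression(max_iter=1000),
)
model.fit(known_texts, labels)

unknown = "...the pseudonymous text to attribute..."
print(dict(zip(model.classes_, model.predict_proba([unknown])[0])))
```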

Okay, but where is the literature? Just show me that it's theoretically possible. I would do the math that supports my side of the argument, but you know... burden of proof.

The reasons discussed in the two comments above apply even with your new scenario. I don't think you understood the core of the arguments.

Also, what you're describing can be done in the present, without a "powerful LLM". And no, it can't be done automatically anytime soon, because HTTP requests are not going to have a "fast takeoff".

Yeah, I guess in many cases there's probably a decade or more of content to find, so there isn't really anything to do about it other than make peace with the inevitable. It's a shame and I don't want it to happen, but the writing has been on the wall for at least three years now.