@ymeskhout's banner

7 followers · follows 0 users · joined 05 Sep 2022

User ID: 696

I'll be honest: I used to think talk of AI risk was so boring that I literally banned the topic at every party I hosted. The discourse generally focused on existential risks so hopelessly detached from any semblance of human scale that I couldn't be bothered to give a shit. I played the Universal Paperclips game and understood what a cataclysmic extinction scenario would sort of look like, but what the fuck was I supposed to do about it now? It was either too far into the future for me to worry about it, or the singularity was already imminent and inevitable. Moreover, the solution usually bandied about was to ensure AI is obedient ("aligned") to human commands. It's a quaint idea, but given how awful humans can be, this is just switching one problem for another.

So if we set aside the grimdark sci-fi scenarios for the moment, what are some near-term risks of humans using AI for evil? I can think of three possibilities where AI can be leveraged as a force multiplier by bad (human) actors: hacking, misinformation, and scamming.

(I was initially under the deluded impression that I had chanced upon a novel insight, but in researching this topic, I realized that famed security researcher Bruce Schneier already wrote about basically the same subject way back in fucking April 2021 [what a jerk!] with his paper The Coming AI Hackers. Also note that I'm roaming outside my usual realm of expertise and hella speculating. Definitely do point out anything I may have gotten wrong, and definitely don't do anything as idiotic as make investment decisions based on what I've written here. That would be so fucking dumb.)

Computers are given instructions through the very simple language of binary: on and off, ones and zeroes. The original method of "talking" to computers was the punch card, which had (at least in theory) an unambiguous precision to its instructions: punch or nah, on or off, one or zero. Punch cards were intimate, artisanal, and extremely tedious to work with. In a fantastic 2017 Atlantic article titled The Coming Software Apocalypse, James Somers charts how computer programming changed over time. As early as the 1960s, software engineers were objecting to the introduction of new-fangled "assembly language" as a replacement for hand-written machine code. The old guard worried that swapping the raw bits 10110000 01100001 for the mnemonic MOV AL, 61h might result in errors or misunderstandings about what the human was actually trying to accomplish. That argument lost because the benefits of increased abstraction were too great to pass up. Low-level languages like assembly are a niche curiosity now, having long since ceded everyday programming to high-level languages like Python. All of those, in turn, risk being displaced by AI coding tools like GitHub's Copilot.
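To make that abstraction ladder concrete, here's a toy Python sketch (the opcode table and `assemble` helper are illustrative inventions, not a real assembler): the x86 instruction MOV AL, 61h really is just the opcode byte 10110000 followed by its operand, and an assembler is, at its core, little more than a table lookup.

```python
# The same instruction at two levels of abstraction (toy example).
machine_code = bytes([0b10110000, 0x61])  # raw bytes the CPU executes
assembly = "MOV AL, 61h"                  # mnemonic a human can read

# A (vastly simplified) assembler: a lookup table from mnemonics to opcodes.
OPCODES = {"MOV AL": 0xB0}  # 0xB0 == 0b10110000

def assemble(mnemonic: str, operand: int) -> bytes:
    """Translate a one-operand mnemonic into machine code."""
    return bytes([OPCODES[mnemonic], operand])

assert assemble("MOV AL", 0x61) == machine_code
```

Every rung up the ladder (assembler, compiler, AI coding tool) repeats this same trade: the human's intent gets easier to express, and the translation down to bits gets harder to inspect.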

Yet despite the increasing complexity, even sophisticated systems remained scrutable to mere mortals. Take, for example, a multibillion-dollar company like Apple, which employs thousands of the world's greatest cybersecurity talent and tasks them with making sure whatever code ends up on iPhones is buttoned up nice and tight. Nevertheless, not too long ago it was still perfectly feasible for a single sufficiently motivated and talented individual to successfully find and exploit vulnerabilities in Apple's library code just by tediously working out of his living room.

Think of increased abstraction in programming as a gain in altitude, and AI coding tools are the yoke pull that will bring us escape velocity. The core issue here is that any human operator looking below will increasingly lose the ability to comprehend anything within the landscape their gaze happens to rest upon. In contrast, AI can swallow up and understand entire rivers of code in a single gulp, effortlessly highlighting and patching vulnerabilities as it glides through the air. In the same amount of time, a human operator can barely kick a panel open only to then find themselves staring befuddled at the vast oceans of spaghetti code below them.

There's a semi-plausible scenario in the far future where technology becomes so unimaginably complex that only Tech-Priests endowed with the proper religious rituals can meaningfully operate machinery. Setting aside that grimdark possibility and focusing just on the human-risk aspect for now, increased abstraction isn't actually too dire of a problem. In the same way that tech companies and teenage hackers waged an arms race over finding and exploiting vulnerabilities, the race will continue, except the entry price will be a coding BonziBuddy. Code that is not washed clean of vulnerabilities by an AI check will be hopelessly torn apart in the wild by malicious roving bots sniffing for exploits.

Until everyone finds themselves on equal footing, with defensive AI broadly distributed, the transition period will be particularly dangerous for anyone even slightly lagging behind. But because AI can be used to find exploits before release, Schneier believes this dynamic will ultimately result in a world that favors the defense, where software vulnerabilities eventually become a thing of the past. The arms race will continue, except it will be relegated to a clash of titans between adversarial governments and large corporations bludgeoning each other with impossibly large AI systems. I might end up eating my words eventually, but the dynamics described here seem unlikely to afford rogue criminal enterprises both access to whatever the cutting-edge AI code sniffers turn out to be and the enormous resource footprint required to operate them.

So how about something more fun, like politics! Schneier and Nathan E. Sanders wrote an NYT op-ed recently that was hyperbolically titled How ChatGPT Hijacks Democracy. I largely agree with Jesse Singal's response in that many of the concerns raised easily appear overblown when you realize they're describing already existing phenomena:

There's also a fatalism lurking within this argument that doesn't make sense. As Sanders and Schneier note further up in their piece, computers (assisted by humans) have long been able to generate huge amounts of comments for... well, any online system that accepts comments. As they also note, we have adapted to this new reality. These days, even folks who are barely online know what spam is.

Adaptability is the key point here. There is a tediously common cycle of hand-wringing over whatever is the latest deepfake technology advance, and how it has the potential to obliterate our capacity to discern truth from fiction. This just has not happened. We've had photograph manipulation literally since the invention of the medium; we have been living with a cinematic industry capable of rendering whatever our minds can conjure with unassailable fidelity; and yet, we're still here. Anyone right now can trivially fake whatever text messages they want, but for some reason this has not become any sort of scourge. It's by no means perfect, but nevertheless, there is something remarkably praiseworthy about humanity's ability to sustain and develop properly calibrated skepticism about the changing world we inhabit.

What also helps is that, at least at present, the state of astroturf propaganda is pathetic. Schneier cites an example of about 250,000 tweets repeating the same pro-Saudi slogan verbatim after the 2018 murder of the journalist Jamal Khashoggi. Perhaps the most concerted effort in this arena is what is colloquially known as Russiagate. Russia did indeed try to spread deliberate misinformation in the 2016 election, but the effect (if any) was too minuscule to have any meaningful impact on any electoral outcome, MSNBC headlines notwithstanding. That lack of results came despite the fact that Russia's Internet Research Agency, which was responsible for the scheme, had $1.25 million to spend every month and employed hundreds of "specialists."

But let's steelman the concern. Whereas Russia had to rely on flesh and blood humans to generate fake social media accounts, AI can be used to drastically expand the scope of possibilities. Beyond reducing the operating cost to near-zero, entire ecosystems of fake users can be conjured out of thin air, along with detailed biographies, unique distinguishing characteristics, and specialization backgrounds. Entire libraries of fabricated bibliographies can similarly be summoned and seeded throughout the internet. Google's system for detecting fraudulent website traffic was calibrated based on the assumption that a majority of users were human. How would we know what's real and what isn't if the swamp gets too crowded? Humans also rely on heuristics ("many people are saying") to make sense of information overload, so will this new AI paradigm augur an age of epistemic learned helplessness?

Eh, doubtful. Propaganda created with the resources and legal immunity of a government is the only area I might have concerns over. But consistent with the notion of the big lie, the false ideas that spread the farthest appear deliberately made to be as bombastic and outlandish as possible. Something false and banal is not interesting enough to care about, but something false and crazy spreads because it selects for gullibility among the populace (see QAnon). I can't predict the future, but the concerns raised here do not seem materially different from similar previous panics that turned out to be duds. Humans' persistent adaptability in processing information appears to be so consistent that it might as well be an axiom.

And finally, scamming. Hoo boy, are people fucked. There's nothing new about swindlers. The classic Nigerian prince email scam was just a repackaged version of similar scams from the sixteenth century. The awkward broken English used in these emails obscures just how labor-intensive it can be to run a 419 scam enterprise from a Nigerian cybercafe. Scammers can expect maybe a handful of initial responses from sending hundreds of emails. The patently fanciful circumstances described by these fictitious princes follow a theme similar to conspiracy theories: the goal is to select for gullibility.

But even after a mark is hooked, the scammer has to invest a lot of time and finesse to close the deal, and the immense gulf in wealth between your typical Nigerian scammer and your typical American victim is what made the atrociously low success rates worthwhile. The New Yorker article The Perfect Mark is a highly recommended and deeply frustrating read, outlining in excruciating detail how one psychotherapist in Massachusetts lost more than $600,000 and was sentenced to prison.

This scam would not have been as prevalent had there not existed a country brimming with English-speaking people with internet access and living in poverty. Can you think of anything else with internet access that can speak infinite English? Get ready for Nigerian Prince Bot 4000.

Unlike the cybersecurity issue, where large institutions have the capabilities and the incentive to shore up defenses, it's not obvious how individuals targeted by confidence tricks can be protected. Besides putting them in a rubber room, of course. No matter how tightly you encrypt the login credentials of someone's bank account, you will always need to give them some way to access their own account, and this means that social engineering will always remain the prime vulnerability in a system. Best of luck, everyone.

Anyways, AI sounds scary! Especially when wielded by bad people. On the flipside of things, I am excited about all the neat video games we're going to get as AI tools continue to trivialize asset creation and coding generation. That's pretty cool, at least. 🤖


[Originally posted on Singal-Minded back in October & now unlocked. Sorry for telling the normies about this place!]

It's an homage to a philosophical pitfall, but the name is also thematically fitting. It conjures up a besieged underdog, a den of miscreants, an isolated outpost, or just immovable stubbornness.

It's The Motte.

This is an obscure internet community wedded to a kinky aspiration --- that it is possible to have enlightening civil conversations about desperately contentious topics. Previously a subreddit, it finally made the exodus to its own independent space following mounting problems with Reddit's increasingly arbitrary and censorious content policies. The Motte is meant as the proverbial gun-free zone of internet discussion. So long as everyone follows strict rules and decorum, they may talk and argue about anything. At its best, it is the platonic ideal of the coffeehouse salon. This tiny corner of the internet has had an outsize influence on my life and yet despite that, I've always struggled to describe it to others succinctly.

In order to do so, I'll have to explain medieval fortification history briefly. Picture a stone tower, sitting pretty on a hill. It may be cramped and unpleasant, but it's safe. Likely impenetrable to any invasion. This is the motte. One cannot live on a diet of stone fortification alone, and so immediately surrounding the motte is the bailey --- the enclosed village serving as the economic engine for the entire enterprise. The bailey's comparative sprawl is what makes it more desirable to live in, and also what makes it more vulnerable, as it can be feasibly fortified only by a dug ditch or wooden palisade. So you hang out in the bailey as much as possible until a marauding band of soldiers threatens your entire existence and forces your retreat up the hill, into the motte. Bailey in the streets, motte in the event of cataclysmic danger, as the kids might say.

We don't have a lot of real-life mottes and baileys these days, but we do have a rhetorical analogy that is very useful: the motte-and-bailey fallacy. Someone bold enough to assert something as inane as "astrology is real" (bailey) might, when challenged, retreat to the infinitely more anodyne "all I meant by astrology being real is that natural forces like celestial bodies might have an effect on human lives" (motte), and who can argue against that? Once the tarot-skeptical challenger gives up on charging up the rampart, the challenged can peek from behind the gate and slink back to the spacious comforts of the bailey, free to expound on the impact of Mercury in retrograde or whatever without any pesky interruptions. Once you recognize this sleazy bait-and-switch, you'll spot it everywhere around you. Other examples: motte: "common-sense gun control"; bailey: ban all civilian firearm ownership. Or motte: "addressing climate change"; bailey: the Voluntary Human Extinction Movement. On and on.

Back to the history of my favorite online community: In the beginning, before The Motte was The Motte, they were the Rationalists (a.k.a. "rat-sphere" or just "rats"). These are a bunch of painfully earnest and lovable nerds unusually mindful about good epistemological hygiene.

Across their odyssey, they gathered around various Schelling points, with the blog-cum-encyclopedia LessWrong among the most prominent. Whatever hurdle to logical reasoning you can come up with (confirmation bias, the availability heuristic, or motivated reasoning, to name a very few) is guaranteed to already be extensively cataloged within its exquisitely maintained database.

It is understandably suspicious when a group names itself after what is presumed to be a universally lauded value, but you can see evidence of this commitment in practice. My favorite vignette to illustrate the humility and intellectual curiosity of the rat-sphere happened when I attended my first meetup and overheard a conversation that started with "Okay, let's assume that ISIS is correct... " with the audience just calmly nodding along, listening intently.

Even if you don't know about the rats, you may have heard of the psychiatrist and writer Scott Alexander. His blog remains a popular caravanserai stop within the rat-sphere. While his writing output is prodigious in both volume of text and topical scope (everything from mythological fiction of Zeus evading a celestial amount of child-support obligations to a literature review of antidepressant medication), what consistently drew the most attention and heat to his platform were his essays on culture war topics, perennial classics like Meditations on Moloch or I Can Tolerate Anything Except The Outgroup to name a select few.

Culture wars are best understood as issues that are generally materially irrelevant, yet are viciously fought over as proxy skirmishes in a battle over society's values. (Consider how much ink is spilled over drag queen story hours.) But something can be both materially irrelevant and fun. And inevitably, like flies to shit, people were most drawn to the juiciest of topics --- the proverbial manure furnaces that generated the brightest of flames. Scott *tried* to keep all this energy contained to a dedicated Culture War Thread on his blog's subreddit, but the problem was that it worked *too well* in encouraging unusually intelligent and cogent articulations of "unthinkable" positions. In part because Scott has made some enemies over the years, and said enemies have eagerly sought opportunities to demonize him as his star has risen, the internet peanut gallery frequently (and disingenuously) attributed the most controversial opinions on the subreddit to Scott himself. This in turn directed ire at the host for "platforming" the miasma. And so in early 2019, Scott emancipated the thread, and a crew of volunteers forked the idea away onto its own subreddit and beatified it with its new name: r/TheMotte.

Because the space was rat-adjacent from the beginning, it had a solid basis to succeed as an oasis of calm. Even with that advantage, the challenge of building a healthy community almost from scratch should not be underestimated. Props to the moderators, who kept the peace with both negative and positive reinforcement. As you might expect in a community dedicated to civil discussion, you could get banned for being unnecessarily antagonistic or for using the subreddit to wage culture war rather than discuss it.

But equally important was the positive reinforcement part of the equation. If anyone's post was particularly good, you would "report" it to the mods as "Actually A Quality Contribution," or AAQC. The mods collected the AAQCs and regularly posted roundups. Consider for a moment and appreciate how radical a departure this is from the norm. The internet has developed well-worn pathways for the constant barrage of wildebeest stampeding toward the latest outrage groundswell, famished to feast on its pulped remains. This machine increasingly resembles one purpose-built for injecting the worst, most negative content into our brains every second of every day. And instead here were these dorks, congregating specifically to talk about the most emotionally heated topics du jour, handing out certificates of appreciation and affirmation.

The AAQC roundups were a crucial component of the community, particularly when they unearthed hidden gems that would otherwise have remained buried. Reddit's down/upvote feature is often ab/used as a proxy for dis/agreement (leave it to the rats to create two-factor voting for internet comments), but the mods made sure to highlight thought-provoking posts especially when they disagreed with them.

Part of the draw was just how unassuming it all was. A small handful of people who wandered in happened to already have well-established writing platforms built elsewhere. But by and large, this was an amateur convention attended by relative nobodies. And yet some of my favorite writing ever was posted exclusively in this remote frontier of Reddit.

The highlights are numerous. How about a grocery store security guard talking about his crisis of faith about modern society that happened during a shift? Or the post that forever changed how I viewed Alex Jones by reframing his unusual way of ranting through the prism of epic poetry tradition? Or the philosophy behind The Motte, where Arthur Chu is cast as the villain? Or how people talk past each other when using the word "capitalism"? Or an extended travelogue of Hawaii's unusual racial dynamics? Or this hypothetical conversation between a barbarian and a 7-11 clerk? Or how Warhammer 40k is a superior franchise to Star Wars thanks in part to higher verisimilitude in its depiction of space fascism? Or this effortlessly poetic meditation on Trump's omnipresence? Or an ethnography of the effectiveness of rifle fire across cultures? Or how the movie Fantastic Mr. Fox straddles the trad/furry divide? Or this catalog of challenges facing a Portland police officer? Or this dispatch from an overwhelmed doctor working during India's horrific second COVID-19 wave? Or a technical warning about Apple's ability to spy on its customers? Or why the major scale in music has such broad multicultural appeal? Or a man brought to tears by overwhelming gratitude while shopping at Walmart? Or how the decline of Western civilization can be reflected in the trajectory of a children's cartoon series? Or how RPGs solved a problem by declaring some fantasy races to be inherently evil only to create another issue? Or how about the potential nobility of --- get this --- indiscriminate retributive homicide from the standpoint of a Chinese military officer going on a shooting rampage after his wife died of a forced abortion?

The structure of the community was such that it gained a sort of natural immunity to trolls. The community was primed to take the arguments trolls made seriously, and this meant drafting intimidating walls of text in earnest. And that wouldn't be the end of it, because you could reliably expect the community to obsess and mull over that same topic for weeks on end, churning out thousands of words more in the process. Most bad-faith actors find it impossible to keep up the charade for that long, and it's just Not Fun™ when a troll's potential victim reacts by obliviously submitting immaculately written essays in reply. Consider an example of the type of discourse that gets prompted by something as wild-eyed as the question of "when is it ethical to murder public officials?". The goal of trolling is to incite immediate, reactive anger, and it must've been dispiriting to enter the space solely to cause trouble, and to slink out having encouraged more AAQCs instead. Anyone dumb enough to try a drive-by bait-and-snark quickly found themselves exhausted and overwhelmed.

Places that explicitly herald themselves as an offshoot from the mainstream quickly gain a reputation as a cesspit of right-wing extremists. Setting aside the question of overall political dominance, it remains true that major institutions (media, finance, tech, etc.) are overwhelmingly staffed by liberal-leaning individuals. Conservatives who feel hounded by the major institutions can opt to carve out their own spaces, and yet nearly every attempt to create the "conservative" alternative to social media giants ends up a toxic waste dump (see Voat, Parler, Gab, etc.).

Scott Alexander described this best when he wrote:

The moral of the story is: if you're against witch-hunts, and you promise to found your own little utopian community where witch-hunts will never happen, your new society will end up consisting of approximately three principled civil libertarians and seven zillion witches. It will be a terrible place to live even if witch-hunts are genuinely wrong.

So it's unsurprising that people have criticized The Motte for being a den of right-wing rogues. For what it's worth, a survey of the community found the modal user to be a libertarian Hillary Clinton voter. But homogeneous thinking is explicitly not the goal here, and the point of the entire enterprise is to have your ideas challenged. Sterilized gruel is the antithesis of critical thinking and the reason why we need places like The Motte.

That's the backstory, and here's how it impacted me personally.

I've always been insatiably curious. But communicating in writing was a momentous struggle for me. Although I coasted through college, writing assignments were virtually the only source of anxiety for me. I once described the writing process as "struggling to take a painful shit." Eking out anything remotely worthwhile was a cataclysmic struggle. I'd stare at a blank page with dread, draft voluminous paragraphs, find myself meandering into gratuitous prose, delete passages until I forgot the point I was making, and then sift through the remaining desiccated husk wondering why anyone would give a fuck. Years ago, before I found my groove in my current job as a public defender, and outside the veil of school-mandated writing, I had ideations of making a living as a writer. A few more of the above-described painful shit sessions conclusively disabused me of that delusion.

In contrast, though, talking about ideas came naturally to me very early. I was always indefatigable and relentless and confrontational and (with all due humility) easily ran laps around people who had the misfortune of engaging in discussion with me in real life. Few were surprised that I became a lawyer.

My frustrations with writing never sapped my passion for reading, but consuming others' work left me feeling forlorn about my own inadequacy. It was hard for me to admire prominent writers without also feeling pangs of envy. But browsing The Motte only sharpened my frustration because these weren't big-name writers churning out incredible posts --- they were random nobodies. So when it first started, I mostly lurked and did not write much, because I did not believe I had the requisite caliber to contribute anything worthwhile.

I changed my mind about contributing after getting drunk with a friend in the backyard of a bar while a Bernese dog eyed our uneaten sandwiches. My friend (a bona fide socialist) and I got into a passionate but civil discussion about the ideal contours of free speech. The specific disagreement doesn't matter, because that afternoon reminded me how invigorated I feel by in-person discussions. It dawned on me how I could properly contribute to The Motte. A few weeks later I memorialized my pseudonym with a fresh new account, and my immediate goal was to start a podcast. Naturally, it was called The Bailey.

Our release schedule may not be the most reliable, but we have put out 29 episodes so far (for the record, that's more than the hilarious and informative legal podcast ALAB). In between recording episodes, I wrote posts on The Motte, almost as an afterthought. But the point here is that I wanted to start a podcast because I thought my writing sucked.

I always knew I could anticipate some vociferous pushback at The Motte. The pushback was crucial, as it was the whetstone to my rhetoric. I knew that if I were going to do something as foolish as post on The Motte, I had to be loaded for bear. I'd sling the grenade by hitting "post," but the notifications that followed promised some reciprocated shrapnel. All the better.

Posting on a dusty corner of Reddit about some culture war bullshit was obviously very low-stakes, but then a very curious thing happened: People noticed my stuff. I'm only slightly embarrassed to admit how gleeful I was telling my girlfriend that something I wrote was recognized as an AAQC and included in the roundup. And it kept happening, again and again. Eventually I was picked to be one of the moderators (joining veterans like podcast apprentice Tracing Woodgrains) in a process that mirrored how the Venetian Doge was selected. I realized over time what a gargantuan amount of writing I had absent-mindedly accumulated over the years just by posting on The Motte, and so when I started my own Substack almost a year ago, its only purpose was to find a home for that compendium.

I kept writing there for years, obliviously using its space to workshop my writing craft and barely noticing. It wasn't until some of my writing escaped into the wild earlier this year (assisted by a certain sentient fox) and received recognition by the powers that be that I realized how grateful I am for the precious space cultivated here.

I could not have accomplished any of this without The Motte. I owe that space --- especially the jerks who deigned to disagree with me --- so much.


Listen on iTunes, Stitcher, Spotify, SoundCloud, Pocket Casts, Google Podcasts, Podcast Addict, and RSS.

In this episode, we discuss porn.

Participants: Yassine, Interversity, Neophos, Xantos.


E016: The Banality of Catgirls (The Bailey)

Is Internet Pornography Causing Sexual Dysfunctions? A Review with Clinical Reports (Behavioral Sciences)

How Pornography Can Ruin Your Sex Life (Mark Manson)

Does too much pornography numb us to sexual pleasure? (Aeon Magazine)

The great porn experiment (TEDx)

Hikikomori (Wikipedia)

The Effects Of Too Much Porn: "He's Just Not That Into Anyone" (The Last Psychiatrist)

Hard Core (The Atlantic)

Recorded 2022-12-18 | Uploaded 2023-01-12

I'm curious about not just what your favorite post is, but also what you think is the GOAT, or perhaps what you think is most illustrative and representative of this space (e.g. what would you show someone to get them intrigued). Please limit your post to only ONE pick and briefly explain why you chose it. This can be from anywhere within the Motte's history thus far, and r/TheThread is a good place to check in case you're having trouble finding something. Asking for a friend.


You know it's really me because who else would care about RSS. Although Reddit was originally built with explicit RSS support, the nested nature of the weekly culture war thread required a slight bit of jury-rigging to show only top-level comments. So the RSS URL for the last thread looked like this: https://old.reddit.com/r/TheMotte/comments/wulqxp.rss?depth=1
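For the fellow RSS sickos, that URL can be built mechanically. A minimal Python sketch (the `thread_rss_url` helper is my own illustration; the `depth=1` query parameter is the same one from the URL above):

```python
from urllib.parse import urlencode

def thread_rss_url(thread_id: str, depth: int = 1) -> str:
    """Build an old-Reddit RSS URL for a thread, limited to top-level comments."""
    base = f"https://old.reddit.com/r/TheMotte/comments/{thread_id}.rss"
    return f"{base}?{urlencode({'depth': depth})}"

print(thread_rss_url("wulqxp"))
# https://old.reddit.com/r/TheMotte/comments/wulqxp.rss?depth=1
```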

I tried adding the culture war thread from here into Feedly but it doesn't seem to recognize the format, and instead prompts me to use a paywalled feature to build custom RSS feeds. Can the rdrama code base support RSS?