The Selfish Gene is one of my favorite books of all time, and although published in 1976 it remains a compelling and insightful introduction to the brute mechanics of natural selection. Richard Dawkins later acknowledged that his book's title may give a misleading impression of its thesis, erroneously ascribing conscious motivations or agentic properties to non-sentient strands of DNA. The core argument is dreadfully simple: genes (as opposed to organisms or species) are the primary unit of natural selection, and if you leave the primordial soup brewing for a while, the only genes that can remain are the ones with a higher proclivity towards replication than their neighbors. "Selfish" genes therefore are not cunning strategists or followers of some manifest destiny, but simply the accidental consequence of natural selection favoring their propagation. Nothing more.
Dawkins is responsible for coining the word 'meme' in the book to describe how the same principles behind gene replication can apply to ideas replicating. I thought about this when I read WoodFromEden's post about the origin of patriarchy.[1] Their explanation for why male dominance persisted historically for so long is elegantly tidy:
Men make war. Or rather, groups of men make war. The groups that were good at making war remained. The groups that were less good at making war perished. That way, human history is a history of successful male military cooperation. Groups with weak male bonding were defeated by groups where men cooperated better.
Here too, there is no dirigible trajectory mapped out ahead of time. Cultural values which valorize physical male violence and facilitate its coordination at scale will become the dominant paradigm purely as a result of the circumstances' ruthless logic. Any deviation from this set of values would lead your tribe towards extinction, which accidentally also meant your bards wouldn't be around to write songs and poems extolling the virtues of sex equality. At least not until there had been an extensive change in circumstances.
This "security dilemma" may have been borne out of petty squabbles over hunting grounds in the Serengeti but its ramifications persisted throughout history. Military service today may be seen as a low-status and distasteful profession — quite literally grunt work — but it used to be venerated deeply as a path to honor and a cornerstone of civic duty. This philosophy is epitomized by the recurring and central portrayal of military men in stories from a long time ago (Homeric heroes of ancient Greece, Genghis Khan, Jedi knights, etc.), their deeds forming the backbone of societal narratives and cultural mythologies.
The historian Bret Devereaux analyzed the grand strategy video game Europa Universalis 4 to illustrate the war-hungry reality of the late medieval period:
Military power requires revenue and manpower (along with staying technologically competitive) and both come from the same source: the land. While a player can develop existing provinces, taking land in war is far cheaper and faster. The game represents this through both developing old land and seizing new land requiring similar resources [but compared to incorporating newly conquered land, development is about 4x as expensive while providing only marginal improvements]. That may seem like the developer has placed their thumb a bit unfairly on the scale, but, as Azar Gat notes in War in Human Civilization (2006) for pre-industrial societies that is a historically correct thumb on the scale. Until the industrial revolution, nearly all of the energy used in production came out of agriculture one way or another; improvements in irrigation, tax collection and farming methods might improve yields, but never nearly so much as adding more land. Consequently, as Gat puts it, returns to capital investment (hitting the development button) were always wildly inferior to returns to successful warfare that resulted in conquest.
For most of history, living the good life meant killing people and taking their shit. The men of martial prowess — those exceptionally good at killing people and taking their shit — were appropriately exalted and deified for the base survival and material gain they were able to provide to their community. Fundamental to this community's well-being is a male's ability to commit acts of horrific physical violence in his individual capacity and to coordinate others to do the same (this too with violence if necessary). Any folklore or morality code which facilitated this core mission will replicate, spread, and become enshrined as humanity's unquestioned zeitgeist. Not because it's the "right" thing to do, but solely because no pacifist egalitarian civilization could have possibly survived to say otherwise.
I've written before about slavery, in a similar vein of Devereaux-inspired historical analysis. Although subject nowadays to some quixotic revisionism about why it existed, there is nothing at all remarkable about slavery's near-universal historical pervasiveness. The only justification anyone ever needed to press another into bondage is the universal desire to have someone else do all the work. Any mythology pasted on top (including institutionalized racism) was always just set dressing. When industrialization made slavery increasingly politically and economically untenable, the moral and legal consensus conveniently caught up.
Consider the chasm between material circumstances then and now. Promises of milk and honey used to serve as the bounty of divine compacts, but today I can performatively buy entire vats of the stuff and barely notice the financial hit. Cheap and abundant electricity is part of the reason I have trivial access to luxuries ancient royalty could only dream about. Buckminster Fuller coined the term energy slave as a way to contextualize energy consumption by calculating the equivalent kilowatt-hours a healthy human could provide through labor. It's a crude equivalence for sure but with some basic assumptions [2] we can calculate the average American relies on the "labor" of about 150 energy slaves. Well what do you know, that happens to be around how many slaves George Washington owned.[3]
The most fascinating book I've never read is The Secret Of Our Success which essentially argues humans succeeded because we're uniquely adept at making shit up — social conventions, cultural norms, religious mythology, etc. — which happens to be directionally useful.
One of the reasons stone tool technology languished for millions of years is likely a result of the brute limitations of a then-human's cognitive capacity. It took about 3 million years of evolution for the human brain to triple in size; a pace too glacial to contemplate but still remarkably fast for natural selection. By contrast, the pace of cultural memetic evolution is not constrained by the corporeal cycle of birth and death. Once the human brain got swole enough, the jet fuel that really powered the next few thousand years of technological advancement was almost entirely a result of cultural advancement. Our ability to create viral memes, in other words.
I'm an atheist who believes religion is a fiction, but I happily recognize it as a materially useful fiction. The Dunbar limit normally would make us dreadfully wary of any interactions with Person No. 151, a hurdle which would have otherwise foreclosed the already impossibly long alloy trade routes necessary to start the bronze age. BUT if you make some shit up about how Person No. 151 is actually totally cool to trade with because they're of the same religion or K-pop fandom or whatever, the cultural fiction is soothing enough for your flighty lizardbrain to let its guard down. Keep this up long enough and maybe pencils can exist.
Our mind's rational capacity to observe patterns, question assumptions, and test hypotheses provides us with an enviable advantage in mastering the physical world, with everything from tracking game to optimizing steam turbines. But paradoxically, as Gurwinder notes in his highly-recommended essay Why Smart People Believe Stupid Things, the very same intelligence can become an effective source of delusion:
As a case in point, human intelligence evolved less as a tool for pursuing objective truth than as a tool for pursuing personal well-being, tribal belonging, social status, and sex, and this often required the adoption of what I call "Fashionably Irrational Beliefs" (FIBs), which the brain has come to excel at. Since we're a social species, it is intelligent for us to convince ourselves of irrational beliefs if holding those beliefs increases our status and well-being.
Unlike George Washington, I don't support slavery (please clap). But also unlike Washington, I conveniently happen to benefit from a dense tapestry of infrastructure and tendinous globe-spanning supply chains affording me near-immediate satisfaction of my most trivial of whims. Based on the evident historical record, without the environmentally deleterious bounty fossil fuels facilitated, most of us would be conjuring up creatively compelling excuses for why forcing your neighbor to work for free is the Moral thing to do. Gurwinder cites exactly such an example with the 19th century physician Samuel A. Cartwright:
A strong believer in slavery, he used his learning to avoid the clear and simple realization that slaves who tried to escape didn't want to be slaves, and instead diagnosed them as suffering from a mental disorder he called drapetomania, which could be remedied by "whipping the devil" out of them. It's an explanation so idiotic only an intellectual could think of it.
The cynical ramifications of my argument might be impossible to avoid completely. Perhaps acknowledging how much our technological milieu guides our moral spirit could beckon us to intensify our agentic nature. To the extent the field of evolutionary psychology can be deployed to shed light on past and present mysteries, perhaps it can shed insight into the future too?
But ultimately, how scary is it to know your deeply held convictions are subject to materialistic opportunism?
[1] As Scott Alexander noted: "If you're allergic to the word "patriarchy", reframe it as the anthropological question of why men were more powerful than women in societies between the Bronze and Industrial Age technology levels."
[2] The average per capita consumption in the US is 300 million BTUs. A human can sustain 75 watts of work over 8 hours, which translates to 2,047 BTUs of energy per day. If we generously also give our energy slaves the weekends off, that's 260 days times 2,047 BTUs, or 532,220 BTUs of energy per year. I very likely fucked this up but I stopped caring hours ago.
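Running the footnote's own numbers as stated (a quick sketch; every input below is the footnote's figure, not independent data) actually yields a figure closer to 560 energy slaves than 150, which is very much in the spirit of the footnote's caveat that the arithmetic is rough:

```python
# Back-of-envelope check of the "energy slave" arithmetic, using only the
# footnote's stated assumptions (these are the essay's figures, not mine).
BTU_PER_KWH = 3412.14        # standard kWh-to-BTU conversion factor

us_per_capita_btu = 300e6    # annual US per capita energy use, in BTU
human_watts = 75             # sustained human work output
hours_per_day = 8
work_days = 260              # weekends off

# 75 W for 8 h is 0.6 kWh, or about 2,047 BTU per day (matches the footnote)
daily_btu = human_watts * hours_per_day / 1000 * BTU_PER_KWH
annual_btu = daily_btu * work_days          # roughly 532,000 BTU per slave-year

energy_slaves = us_per_capita_btu / annual_btu
print(round(daily_btu), round(energy_slaves))   # 2047 564
```

The gap between 564 and the main text's ~150 presumably comes down to different assumptions about work hours or conversion losses; either way, the order of magnitude is the point.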
[3] Another crude equivalence, but Washington's net worth in today's dollars is around $700 million, far outstripping every other US president until Trump showed up.
A long time ago, some primitive apes got addicted to rocks.
The earliest stone tools were crude bastards, made by smashing large river pebbles together and calling it a day.
Stone choppers like the one above took the prehistoric neighborhood by storm almost 3 million years ago. However dull the tools themselves may have been, this was the cutting-edge technology for literally more than a million years, a timescale I have no capacity to comprehend. It wasn't until around 1.7 million years ago (again, no idea what this means) that someone got the bright idea of chipping away both sides of a rock. You can see what the (tedious) process looks like.
The end result is the unassuming tear-drop shaped hand axe, by far the longest used tool in human history. There are no accessories here with the hand axe; its name comes from the fact that you use it by holding it directly with your hands.
On top of being tedious and painful to make, you can imagine that it's not terribly comfortable to hold while using. Hand axes also have to be somewhat bulky because of the necessity of combining the sharp useful end with the blunt holding end. But what if --- stay with me for a second --- instead of holding the thing directly with our pathetic squishy hands, we held something that "handled" the tool for us? It took humans about another million years to discover hafting, with the earliest examples from around 500,000 years ago, but the technique didn't really find its stride until the microlith era of stone tools around 35,000 years ago.
Then humans found metal.
"Technological advance is an inherently iterative process. One does not simply take sand from the beach and produce a Dataprobe. We use crude tools to fashion better tools, and then our better tools to fashion more precise tools, and so on. Each minor refinement is a step in the process, and all of the steps must be taken."
-- Chairman Sheng-ji Yang, "Looking God in the Eye"
The historian Bret Devereaux has an excellent and highly-recommended series on the history of iron. The popular depiction of iron as a rare commodity (typified within the medieval and fantasy genres) obscures some of the reality. As a material, iron is extremely abundant --- the fourth most common element in the Earth's crust, making up 5% of its mass. The hurdle with iron wasn't finding it but rather getting it out of the ground and into a usable form. It required a lot of dead trees and broken shins. One of the illustrations Devereaux cited is from 1556, and shows how workers wore shin protection as they crushed the ore into usable chunks.
Think about how many mangled limbs had to accumulate before medieval OSHA cared enough about this hazard. After the ore is dug out of the ground, the next hurdle was figuring out how to reach the high temperatures needed for processing. Because of how finicky iron is about absorbing too much carbon, the only feasible avenue was charcoal, which is made from wood, which is cut from many many trees. As Devereaux notes:
To put that in some perspective, a Roman legion (roughly 5,000 men) in the Late Republic might have carried into battle around 44,000kg (c. 48.5 tons) of iron -- not counting pots, fittings, picks, shovels and other tools we know they used. That iron equipment in turn might represent the mining of around 541,200kg (c. 600 tons) of ore, smelted with 642,400kg (c. 710 tons) of charcoal, made from 4,620,000kg (c. 5,100 tons) of wood. Cutting the wood and making the charcoal alone, from our figures above, might represent something like (I am assuming our charcoal-burners are working in teams) 80,000 man-days of labor. For one legion.
To understate it, much has changed since. A stainless steel spoon today is a trivially manufactured artifact. But just the material from that spoon would have represented thousands of times its weight in stone and tree, all excavated by hand. I think about what this spoon, held in the palm of my hand, would have previously cost in terms of human toil and crushed limbs.
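To make that concrete, we can scale Devereaux's legion-level figures down to a single spoon. The ratios below come straight from the block quote above; the 50-gram spoon mass is my own assumption:

```python
# One legion's iron supply chain, per the Devereaux quote above.
iron_kg     = 44_000       # iron carried into battle
ore_kg      = 541_200      # ore mined to smelt it
charcoal_kg = 642_400      # charcoal burned in smelting
wood_kg     = 4_620_000    # wood cut to make the charcoal
labor_days  = 80_000       # man-days of wood-cutting and charcoal-burning alone

spoon_kg = 0.050           # assumed mass of a stainless steel spoon

wood_per_iron = wood_kg / iron_kg              # 105 kg of wood per kg of iron
spoon_wood = wood_per_iron * spoon_kg          # ~5.25 kg of wood per spoon
spoon_labor = labor_days / iron_kg * spoon_kg  # ~0.09 man-days per spoon

print(wood_per_iron, round(spoon_wood, 2), round(spoon_labor, 2))
```

By pre-industrial accounting, the metal in one modern spoon implies roughly five kilograms of felled trees and the better part of an hour of someone's labor before the smithing even begins.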
This post is about AI.
I feel like I'm holding a hand axe right now, while everyone around me is revving up their chainsaws. I feel like I'm a peasant awestruck at the intricacies of a steel spoon, unaware of its bargain bin progeny.
It's difficult, and exhausting, to keep up with the pace of AI developments. I also question my ability to make any sort of concrete or realistic predictions in this field, so I'll try to keep it semi-grounded in the present.
What already seems evident is that, even if we assume a complete halt to any further developments, content creation is already utterly trivialized. Do you want a picture of a cat riding a unicycle while smoking a hookah? Here's 50. Do you want those same drawings but done as if Picasso was tripping out on LSD? Done. Do you want the script from an 80-episode television series involving these psychedelic Picasso unicycle cats as they work to solve a murder mystery on a cruise ship in a black hole? And you want each cat voiced by a different rap artist from Kanye West to DMX? Why not also make it a choose-your-own adventure series controlled by each viewer? Sure, whatever, done. Some of these require a little work to stitch together, but you can have it all.
Part of where my feelings are settling is a bizarre mix of trepidation, ennui, fatigue, and...excitement? I'm not the only one to ever experience mild frustration that a given movie, TV show, book, video game, etc. wasn't exactly just right, and if only the creators had changed this one thing it would've been so much better.
I encounter this feeling constantly with video games and for that same reason I tend to gravitate towards extensively modifying big-budget video games to my liking with mods. For a period of time, I definitely sunk in more hours finding, installing, and configuring Skyrim mods than actually playing the game itself. This was only possible because other people were insane enough to pop the hood open and get their hands dirty. If I wanted cold weather survival elements added to Skyrim, I was lucky enough that someone else had the gumption to analyze the game files, draft up pseudo-scripts, and collect custom-made assets into a coherent package that actually worked.
I also appreciate the esotericism of open-source oeuvres made entirely by coding hobbyists, like the suburban apocalypse simulator Cataclysm: Dark Days Ahead. Cataclysm is a jury-rigged amalgamation, cobbled together over the years by dozens of drive-by developers. Some aspects of the game are painfully undercooked, such as the lack of any real ending, while others are pathologically overdeveloped, such as the ridiculously intricate vehicle physics system which manages to accurately simulate drag resistance in a game where no one will ever notice the difference. The only reason there's any progress made on these projects is that there are enough enthusiasts roaming around with actual coding talent, but they'll only chase after their own whims and then move on. Anyone else with ideas either has to convince one of these sensei to take up their cause, or drudge through hours of coding tutorials on YouTube to ever stand a chance. Lots of fields lie fallow as a result.
Outside of play and in the realm of work, much of my time is spent chasing down tedium. A few tasks manage to reliably trigger my procrastination reflex, the main one being legal research and writing. Let's say I'm trying to have incriminating statements or evidence suppressed. If the scenario is even slightly interesting, I am unlikely to find a case precedent within my jurisdiction that is perfectly on point. Instead, I dump a few search terms into a legal database and then spend hours with dozens of tabs open, dutifully reviewing each one hoping I can find enough adjacent precedent to triangulate an answer for my own case. Judicial opinions are almost never written in a uniform manner, so I often realize a given case is worthless only after already wasting several minutes reviewing it. After all that research, I have to synthesize it into something legally accurate without boring the overworked judge to death.
It's all tedious boring work. It's also a perfect use-case scenario for chatGPT because it would be trivial for me to just ask it to quickly find and summarize whatever is analogous to what I'm looking for, then write something custom-tailored. The day that Westlaw incorporates chatGPT is the day that Thomson Reuters will become a pseudo-branch of the Treasury Department, for its ability to just print money from the legal profession. To be clear, my concern here is not job loss. I imagine that with greater productivity comes greater expectations, especially with AI helpers at our side.
I wonder, why bother with any of it now?
On the consumption side, whatever game I choose to play now will only get way better in a few months as I'm able to trivially customize it to my mind's whim. Same with whatever television show, or movie, or book. Or existence.
On the production side there's so much more I want to write but I also wonder, why bother writing anything if it's just going to be swallowed up whole and incorporated into the labyrinthian halls of a Borgesian infinite library? Realistically the only effect this post will ultimately leave upon the world is a faint whisper of an errant memory. The rest will either be carved up into individual tokens or buried under a figurative mountain of indecipherable pages. I see the entire corpus of mankind's creative output as a tiny ship, a gnat really, about to be swallowed by a towering ocean wave. Part of me just wants to sit and wait for the flood.
I wrote this entire post without chatGPT, to prove something I guess. It took hours. I had to look up some new concepts, read enough to understand them, revisit old essays I read, and review them to refresh my memory. After all that, I had to use my dumb fingers to tap buttons on my dumb keyboard, over and over again.
I'm the idiot holding the hand axe. I'm the imbecile mangling my shins with rock debris. Why bother?
Listen on iTunes, Stitcher, Spotify, Pocket Casts, Google Podcasts, Podcast Addict, and RSS.
In this episode, we discuss gayness.
Participants: Yassine, TracingWoodgrains, Sultan, Shakesneer.
Links:
Ezra Klein Interviews Dan Savage (New York Times)
Stonewall: A Butch Too Far (An Historian Goes to the Movies)
Mattachine Society (Wikipedia)
3 Differences Between the Terms 'Gay' and 'Queer' (Everyday Feminism)
Exploring HIV Transmission Rates (Healthline)
Boys Beware (PBS)
Recorded 2023-02-02 | Uploaded 2023-02-28
I'll be honest: I used to think talk of AI risk was so boring that I literally banned the topic at every party I hosted. The discourse generally focused on existential risks so hopelessly detached from any semblance of human scale that I couldn't be bothered to give a shit. I played the Universal Paperclips game and understood what a cataclysmic extinction scenario would sort of look like, but what the fuck was I supposed to do about it now? It was either too far into the future for me to worry about it, or the singularity was already imminent and inevitable. Moreover, the solution usually bandied about was to ensure AI is obedient ("aligned") to human commands. It's a quaint idea, but given how awful humans can be, this is just switching one problem for another.
So if we set aside the grimdark sci-fi scenarios for the moment, what are some near-term risks of humans using AI for evil? I can think of three possibilities where AI can be leveraged as a force multiplier by bad (human) actors: hacking, misinformation, and scamming.
(I initially was under the deluded impression that I chanced upon a novel insight, but in researching this topic, I realized that famed security researcher Bruce Schneier already wrote about basically the same subject way back in fucking April 2021 [what a jerk!] with his paper The Coming AI Hackers. Also note that I'm roaming outside my usual realm of expertise and hella speculating. Definitely do point out anything I may have gotten wrong, and definitely don't do anything as idiotic as make investment decisions based on what I've written here. That would be so fucking dumb.)
Computers are given instructions through the very simple language of binary: on and off, ones and zeroes. The original method of "talking" to computers was a punch card, which had (at least in theory) an unambiguous precision to its instructions: punch or nah, on or off, one or zero. Punch cards were intimate, artisanal, and extremely tedious to work with. In a fantastic 2017 Atlantic article titled The Coming Software Apocalypse, James Somers charts how computer programming changed over time. As early as the 1960s, software engineers were objecting to the introduction of this new-fangled "assembly language" as a replacement for punch cards. The old guard worried that replacing 10110000 01100001 on a punch card with MOV AL, 61h might result in errors or misunderstandings about what the human actually was trying to accomplish. This argument lost because the benefits of increased code abstraction were too great to pass up. Low-level languages like assembly are an ancient curiosity now, having long since been replaced by high-level languages like Python and others. All those in turn risk being replaced by AI coding tools like Github's Copilot.
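The punch-card example above is real x86 machine code, and the correspondence is easy to verify: the byte 10110000 (0xB0) is the opcode for moving an 8-bit immediate value into the AL register, and 01100001 (0x61) is that immediate. A quick sketch:

```python
# The two bytes from the punch-card example, written as bit strings.
bits = ["10110000", "01100001"]
machine_code = bytes(int(b, 2) for b in bits)

# x86 encoding: 0xB0 is the "MOV AL, imm8" opcode; the following byte is
# the 8-bit immediate operand. Together they spell MOV AL, 61h.
opcode, immediate = machine_code
print(hex(opcode), hex(immediate))  # 0xb0 0x61
```

Each rung up the abstraction ladder (assembly, then high-level languages, now AI code generation) trades away exactly this bit-level legibility for expressiveness, which was the old guard's worry all along.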
Yet despite the increasing complexity, even sophisticated systems remained scrutable to mere mortals. Take, for example, a multibillion-dollar company like Apple, which employs thousands of the world's greatest cybersecurity talent and tasks them with making sure whatever code ends up on iPhones is buttoned up nice and tight. Nevertheless, not too long ago it was still perfectly feasible for a single sufficiently motivated and talented individual to successfully find and exploit vulnerabilities in Apple's library code just by tediously working out of his living room.
Think of increased abstraction in programming as a gain in altitude, and AI coding tools as the yoke pull that will bring us to escape velocity. The core issue here is that any human operator looking below will increasingly lose the ability to comprehend anything within the landscape their gaze happens to rest upon. In contrast, AI can swallow up and understand entire rivers of code in a single gulp, effortlessly highlighting and patching vulnerabilities as it glides through the air. In the same amount of time, a human operator can barely kick a panel open only to then find themselves staring befuddled at the vast oceans of spaghetti code below them.
There's a semi-plausible scenario in the far future where technology becomes so unimaginably complex that only Tech-Priests endowed with the proper religious rituals can meaningfully operate machinery. Setting aside that grimdark possibility and focusing just on the human risk aspect for now, increased abstraction isn't actually too dire of a problem. In the same way that tech companies and teenage hackers waged an arms race over finding and exploiting vulnerabilities, the race will continue except the entry price will require a coding BonziBuddy. Code that is not washed clean of vulnerabilities by an AI check will be hopelessly torn apart in the wild by malicious roving bots sniffing for exploits.
Until everyone finds themselves on equal footing where defensive AI is broadly distributed, the transition period will be particularly dangerous for anyone even slightly lagging behind. But because AI can be used to find exploits before release, Schneier believes this dynamic will ultimately result in a world that favors the defense, where software vulnerabilities eventually become a thing of the past. The arms race will continue, except it will be relegated to a clash of titans between adversarial governments and large corporations bludgeoning each other with impossibly large AI systems. I might end up eating my words eventually, but the dynamics described here seem unlikely to afford rogue criminal enterprises the ability to have both access to whatever the cutting-edge AI code sniffers are and the enormous resource footprint required to operate them.
So how about something more fun, like politics! Schneier and Nathan E. Sanders wrote an NYT op-ed recently that was hyperbolically titled How ChatGPT Hijacks Democracy. I largely agree with Jesse Singal's response in that many of the concerns raised easily appear overblown when you realize they're describing already existing phenomena:
There's also a fatalism lurking within this argument that doesn't make sense. As Sanders and Schneier note further up in their piece, computers (assisted by humans) have long been able to generate huge amounts of comments for... well, any online system that accepts comments. As they also note, we have adapted to this new reality. These days, even folks who are barely online know what spam is.
Adaptability is the key point here. There is a tediously common cycle of hand-wringing over whatever is the latest deepfake technology advance, and how it has the potential to obliterate our capacity to discern truth from fiction. This just has not happened. We've had photograph manipulation literally since the invention of the medium; we have been living with a cinematic industry capable of rendering whatever our minds can conjure with unassailable fidelity; and yet, we're still here. Anyone right now can trivially fake whatever text messages they want, but for some reason this has not become any sort of scourge. It's by no means perfect, but nevertheless, there is something remarkably praiseworthy about humanity's ability to sustain and develop properly calibrated skepticism about the changing world we inhabit.
What also helps is that, at least at present, the state of astroturf propaganda is pathetic. Schneier cites an example of about 250,000 tweets repeating the same pro-Saudi slogan verbatim after the 2018 murder of the journalist Jamal Khashoggi. Perhaps the most concerted effort in this arena is what is colloquially known as Russiagate. Russia did indeed try to spread deliberate misinformation in the 2016 election, but the effect (if any) was too minuscule to have any meaningful impact on any electoral outcome, MSNBC headlines notwithstanding. The lack of results is despite the fact that Russia's Internet Research Agency, which was responsible for the scheme, had $1.25 million to spend every month and employed hundreds of "specialists."
But let's steelman the concern. Whereas Russia had to rely on flesh and blood humans to generate fake social media accounts, AI can be used to drastically expand the scope of possibilities. Beyond reducing the operating cost to near-zero, entire ecosystems of fake users can be conjured out of thin air, along with detailed biographies, unique distinguishing characteristics, and specialization backgrounds. Entire libraries of fabricated bibliographies can similarly be summoned and seeded throughout the internet. Google's system for detecting fraudulent website traffic was calibrated based on the assumption that a majority of users were human. How would we know what's real and what isn't if the swamp gets too crowded? Humans also rely on heuristics ("many people are saying") to make sense of information overload, so will this new AI paradigm augur an age of epistemic learned helplessness?
Eh, doubtful. Propaganda created with the resources and legal immunity of a government is the only area I might have concerns over. But consistent with the notion of the big lie, the false ideas that spread the farthest appear deliberately made to be as bombastic and outlandish as possible. Something false and banal is not interesting enough to care about, but something false and crazy spreads because it selects for gullibility among the populace (see QAnon). I can't predict the future, but the concerns raised here do not seem materially different from similar previous panics that turned out to be duds. Humans' persistent adaptability in processing information appears to be so consistent that it might as well be an axiom.
And finally, scamming. Hoo boy, are people fucked. There's nothing new about swindlers. The classic Nigerian prince email scam was just a repackaged version of similar scams from the sixteenth century. The awkward broken English used in these emails obscures just how labor-intensive it can be to run a 419 scam enterprise from a Nigerian cybercafe. Scammers can expect maybe a handful of initial responses from sending hundreds of emails. The patently fanciful circumstances described by these fictitious princes follow a similar theme for conspiracies: The goal is to select for gullibility.
But even after a mark is hooked, the scammer has to invest a lot of time and finesse to close the deal, and the immense gulf in wealth between your typical Nigerian scammer and your typical American victim is what made the atrociously low success rates worthwhile. The New Yorker article The Perfect Mark is a highly recommended and deeply frustrating read, outlining in excruciating detail how one psychotherapist in Massachusetts lost more than $600,000 and was sentenced to prison.
This scam would not have been as prevalent had there not existed a country brimming with English-speaking people with internet access and living in poverty. Can you think of anything else with internet access that can speak infinite English? Get ready for Nigerian Prince Bot 4000.
Unlike the cybersecurity issue, where large institutions have the capabilities and the incentive to shore up defenses, it's not obvious how individuals targeted by confidence tricks can be protected. Besides putting them in a rubber room, of course. No matter how tightly you encrypt the login credentials of someone's bank account, you will always need to give them some way to access their own account, and this means that social engineering will always remain the prime vulnerability in a system. Best of luck, everyone.
Anyways, AI sounds scary! Especially when wielded by bad people. On the flipside of things, I am excited about all the neat video games we're going to get as AI tools continue to trivialize asset creation and code generation. That's pretty cool, at least. 🤖
[Originally posted on Singal-Minded back in October & now unlocked. Sorry for telling the normies about this place!]
It's an homage to a philosophical pitfall, but the name is also thematically fitting. It conjures up a besieged underdog, a den of miscreants, an isolated outpost, or just immovable stubbornness.
It's The Motte.
This is an obscure internet community wedded to a kinky aspiration --- that it is possible to have enlightening civil conversations about desperately contentious topics. Previously a subreddit, it finally made the exodus to its own independent space following mounting problems with Reddit's increasingly arbitrary and censorious content policies. The Motte is meant as the proverbial gun-free zone of internet discussion. So long as everyone follows strict rules and decorum, they may talk and argue about anything. At its best, it is the platonic ideal of the coffeehouse salon. This tiny corner of the internet has had an outsize influence on my life and yet despite that, I've always struggled to describe it to others succinctly.
In order to do so, I'll have to explain medieval fortification history briefly. Picture a stone tower, sitting pretty on a hill. It may be cramped and unpleasant, but it's safe. Likely impenetrable to any invasion. This is the motte. One cannot live on a diet of stone fortification alone, and so immediately surrounding the motte is the bailey --- the enclosed village serving as the economic engine for the entire enterprise. The bailey's comparative sprawl is what makes it more desirable to live in, and also what makes it more vulnerable, as it can be feasibly fortified only by a dug ditch or wooden palisade. So you hang out in the bailey as much as possible until a marauding band of soldiers threatens your entire existence and forces your retreat up the hill, into the motte. Bailey in the streets, motte in the event of cataclysmic danger, as the kids might say.
We don't have a lot of real-life mottes and baileys these days, but we do have a rhetorical analogy that is very useful: the motte-and-bailey fallacy. Someone bold enough to assert something as inane as "astrology is real" (bailey) might, when challenged, retreat to the infinitely more anodyne "all I meant by astrology being real is that natural forces like celestial bodies might have an effect on human lives" (motte), and who can argue against that? Once the tarot-skeptical challenger gives up on charging up the rampart, the challenged can peek from behind the gate and slink back to the spacious comforts of the bailey, free to expound on the impact of Mercury in retrograde or whatever without any pesky interruptions. Once you recognize this sleazy bait-and-switch, you'll spot it everywhere around you. Other examples: motte: "common-sense gun control"; bailey: ban all civilian firearm ownership. Or motte: "addressing climate change"; bailey: the Voluntary Human Extinction Movement. On and on.
Back to the history of my favorite online community: In the beginning, before The Motte was The Motte, they were the Rationalists (a.k.a. "rat-sphere" or just "rats"). These are a bunch of painfully earnest and lovable nerds unusually mindful about good epistemological hygiene.
Across their odyssey, they gathered around various Schelling points, with the blog-cum-encyclopedia LessWrong among the most prominent. Whatever hurdles to logical reasoning you can come up with (confirmation bias, availability heuristic, or motivated reasoning, to name very few) are guaranteed to be extensively cataloged within its exquisitely maintained database.
It is understandably suspicious when a group names itself after what is presumed to be a universally lauded value, but you can see evidence of this commitment in practice. My favorite vignette to illustrate the humility and intellectual curiosity of the rat-sphere happened when I attended my first meetup and overheard a conversation that started with "Okay, let's assume that ISIS is correct... " with the audience just calmly nodding along, listening intently.
Even if you don't know about the rats, you may have heard of the psychiatrist and writer Scott Alexander. His blog remains a popular caravanserai stop within the rat-sphere. While his writing output is prodigious in both volume of text and topical scope (everything from mythological fiction of Zeus evading a celestial amount of child-support obligations to a literature review of antidepressant medication), what consistently drew the most attention and heat to his platform were his essays on culture war topics, perennial classics like Meditations on Moloch or I Can Tolerate Anything Except The Outgroup to name a select few.
Culture wars are best understood as issues that are generally materially irrelevant, yet are viciously fought over as proxy skirmishes in a battle over society's values. (Consider how much ink is spilled over drag queen story hours.) But something can be both materially irrelevant and fun. And inevitably, like flies to shit, people were most drawn to the juiciest of topics --- the proverbial manure furnaces that generated the brightest of flames. Scott *tried* to keep all this energy contained to a dedicated Culture War Thread on his blog's subreddit, but the problem was that it worked *too well* in encouraging unusually intelligent and cogent articulations of "unthinkable" positions. In part because Scott has made some enemies over the years, and said enemies have eagerly sought opportunities to demonize him as his star has risen, the internet peanut gallery frequently (and disingenuously) attributed the most controversial opinions on the subreddit to Scott himself. This in turn directed ire at the host for "platforming" the miasma. And so in early 2019, Scott emancipated the thread, and a crew of volunteers forked the idea away onto its own subreddit and beatified it with its new name: r/TheMotte.
Because the space was rat-adjacent from the beginning, it had a solid basis to succeed as an oasis of calm. Even with that advantage, the challenge of building a healthy community almost from scratch should not be underestimated. Props to the moderators, who kept the peace with both negative and positive reinforcement. As you might expect in a community dedicated to civil discussion, you could get banned for being unnecessarily antagonistic or for using the subreddit to wage culture war rather than discuss it.
But equally important was the positive reinforcement part of the equation. If anyone's post was particularly good, you would "report" it to the mods as "Actually A Quality Contribution," or AAQC. The mods collected the AAQCs and regularly posted roundups. Consider for a moment and appreciate how radical a departure this is from the norm. The internet has developed well-worn pathways from the constant barrage of wildebeest stampeding to the latest outrage groundswell, famished to feast on its pulped remains. This machine increasingly resembles one purpose-built for injecting the worst, most negative content into our brains every second of every day. And instead here were these dorks, congregating specifically to talk about the most emotionally heated topics du jour, handing out certificates of appreciation and affirmation.
The AAQC roundups were a crucial component of the community, particularly when they unearthed hidden gems that would otherwise have remained buried. Reddit's down/upvote feature is often ab/used as a proxy for dis/agreement (leave it to the rats to create two-factor voting for internet comments), but the mods made sure to highlight thought-provoking posts especially when they disagreed with them.
Part of the draw was just how unassuming it all was. A small handful of people who wandered in happened to already have well-established writing platforms built elsewhere. But by and large, this was an amateur convention attended by relative nobodies. And yet some of my favorite writing ever was posted exclusively in this remote frontier of Reddit.
The highlights are numerous. How about a grocery store security guard talking about his crisis of faith about modern society that happened during a shift? Or the post that forever changed how I viewed Alex Jones by reframing his unusual way of ranting through the prism of epic poetry tradition? Or the philosophy behind The Motte, where Arthur Chu is cast as the villain? Or how people talk past each other when using the word "capitalism"? Or an extended travelogue of Hawaii's unusual racial dynamics? Or this hypothetical conversation between a barbarian and a 7-11 clerk? Or how Warhammer 40k is a superior franchise to Star Wars thanks in part to higher verisimilitude in its depiction of space fascism? Or this effortlessly poetic meditation on Trump's omnipresence? Or an ethnography of the effectiveness of rifle fire across cultures? Or how the movie Fantastic Mr. Fox straddles the trad/furry divide? Or this catalog of challenges facing a Portland police officer? Or this dispatch from an overwhelmed doctor working during India's horrific second COVID-19 wave? Or a technical warning about Apple's ability to spy on its customers? Or why the major scale in music has such broad multicultural appeal? Or a man brought to tears by overwhelming gratitude while shopping at Walmart? Or how the decline of Western civilization can be reflected in the trajectory of a children's cartoon series? Or how RPGs solved a problem by declaring some fantasy races to be inherently evil only to create another issue? Or how about the potential nobility of --- get this --- indiscriminate retributive homicide from the standpoint of a Chinese military officer going on a shooting rampage after his wife died of a forced abortion?
The structure of the community was such that it gained a sort of natural immunity to trolls. The community was primed to take the arguments trolls made seriously, and this meant drafting intimidating walls of text in earnest. And that wouldn't be the end of it, because you could reliably expect the community to obsess and mull over that same topic for weeks on end, churning out thousands of words more in the process. Most bad-faith actors find it impossible to keep up the charade for that long, and it's just Not Fun™ when a troll's potential victim reacts by obliviously submitting immaculately written essays in reply. Consider an example of the type of discourse that gets prompted by something as wild-eyed as the question of "when is it ethical to murder public officials?". The goal of trolling is to incite immediate, reactive anger, and it must've been dispiriting to enter the space solely to cause trouble, and to slink out having encouraged more AAQCs instead. Anyone dumb enough to try a drive-by bait-and-snark quickly found themselves exhausted and overwhelmed.
Places that explicitly herald themselves as an offshoot from the mainstream quickly gain a reputation as a cesspit of right-wing extremists. Setting aside the question of overall political dominance, it remains true that major institutions (media, finance, tech, etc.) are overwhelmingly staffed by liberal-leaning individuals. Conservatives who feel hounded by the major institutions can opt to carve out their own spaces, and yet nearly every attempt to create the "conservative" alternative to a social media giant ends up a toxic waste dump (see Voat, Parler, Gab, etc.).
Scott Alexander described this best when he wrote:
The moral of the story is: if you're against witch-hunts, and you promise to found your own little utopian community where witch-hunts will never happen, your new society will end up consisting of approximately three principled civil libertarians and seven zillion witches. It will be a terrible place to live even if witch-hunts are genuinely wrong.
So it's unsurprising that people have criticized The Motte for being a den of right-wing rogues. For what it's worth, a survey of the community found the modal user to be a libertarian Hillary Clinton voter. But homogeneous thinking is explicitly not the goal here, and the point of the entire enterprise is to have your ideas challenged. Sterilized gruel is the antithesis of critical thinking and the reason why we need places like The Motte.
That's the backstory, and here's how it impacted me personally.
I've always been insatiably curious. But communicating in writing was a momentous struggle for me. Although I coasted through college, writing assignments were virtually the only source of anxiety for me. I once described the writing process as "struggling to take a painful shit." Eking out anything remotely worthwhile was a cataclysmic struggle. I'd stare at a blank page with dread, draft voluminous paragraphs, find myself meandering into gratuitous prose, delete passages until I forgot the point I was making, and then sift through the remaining desiccated husk wondering why anyone would give a fuck. Years ago, before I found my groove in my current job as a public defender, and outside the veil of school-mandated writing, I had ideations of making a living as a writer. A few more of the above-described painful shit sessions conclusively disabused me of that delusion.
In contrast, though, talking about ideas came naturally to me very early. I was always indefatigable and relentless and confrontational and (with all due humility) easily ran laps around people who had the misfortune of engaging in discussion with me in real life. Few were surprised that I became a lawyer.
My frustrations with writing never sapped my passion for reading, but consuming others' work left me feeling forlorn about my own inadequacy. It was hard for me to admire prominent writers without also feeling pangs of envy. But browsing The Motte only sharpened my frustration because these weren't big-name writers churning out incredible posts --- they were random nobodies. So when it first started, I mostly lurked and did not write much, because I did not believe I had the requisite caliber to contribute anything worthwhile.
I changed my mind about contributing after getting drunk with a friend in the backyard of a bar while a Bernese dog eyed our uneaten sandwiches. My friend (a bona fide socialist) and I got into a passionate but civil discussion about the ideal contours of free speech. The specific disagreement doesn't matter, because that afternoon reminded me how invigorated I feel by in-person discussions. It dawned on me how I could properly contribute to The Motte. A few weeks later I memorialized my pseudonym with a fresh new account, and my immediate goal was to start a podcast. Naturally, it was called The Bailey.
Our release schedule may not be the most reliable, but we have put out 29 episodes so far (for the record, that's more than the hilarious and informative legal podcast ALAB). In between recording episodes, I wrote posts on The Motte, almost as an afterthought. But the point here is that I wanted to start a podcast because I thought my writing sucked.
I always knew I could anticipate some vociferous pushback at The Motte. The pushback was crucial, as it was the whetstone to my rhetoric. I knew that if I were going to do something as foolish as post on The Motte, I had to be loaded for bear. I'd sling the grenade by hitting "post," but the notifications that followed promised some reciprocated shrapnel. All the better.
Posting on a dusty corner of Reddit about some culture war bullshit was obviously very low-stakes, but then a very curious thing happened: People noticed my stuff. I'm only slightly embarrassed to admit how gleeful I was telling my girlfriend that something I wrote was recognized as an AAQC and included in the roundup. And it kept happening, again and again. Eventually I was picked to be one of the moderators (joining veterans like podcast apprentice Tracing Woodgrains) in a process that mirrored how the Venetian Doge was selected. I realized over time just how much of a gargantuan amount of writing I had absent-mindedly accumulated over the years just by posting on The Motte, and so when I started my own Substack almost a year ago, its only purpose was to find a home for that compendium.
I kept writing there for years, obliviously using its space to workshop my writing craft and barely noticing. It wasn't until some of my writing escaped into the wild earlier this year (assisted by a certain sentient fox) and received recognition by the powers that be that I realized how grateful I am for the precious space cultivated here.
I could not have accomplished any of this without The Motte. I owe that space --- especially the jerks who deigned to disagree with me --- so much.
Listen on iTunes, Stitcher, Spotify, SoundCloud, Pocket Casts, Google Podcasts, Podcast Addict, and RSS.
In this episode, we discuss porn.
Participants: Yassine, Interversity, Neophos, Xantos.
Links:
E016: The Banality of Catgirls (The Bailey)
Is Internet Pornography Causing Sexual Dysfunctions? A Review with Clinical Reports (Behavioral Sciences)
How Pornography Can Ruin Your Sex Life (Mark Manson)
Does too much pornography numb us to sexual pleasure? (Aeon Magazine)
The great porn experiment (TEDx)
Hikikomori (Wikipedia)
The Effects Of Too Much Porn: "He's Just Not That Into Anyone" (The Last Psychiatrist)
Hard Core (The Atlantic)
Recorded 2022-12-18 | Uploaded 2023-01-12
I'm curious about not just what your favorite post is, but also what you think is the GOAT, or perhaps what you think is most illustrative and representative of this space (e.g. what would you show someone to get them intrigued). Please limit your post to only ONE pick and briefly explain why you chose it. This can be from anywhere within the Motte's history thus far, and r/TheThread is a good place to check in case you're having trouble finding something. Asking for a friend.
You know it's really me because who else would care about RSS. Although Reddit was originally built with explicit RSS support, the nested nature of the weekly culture war thread required a slight bit of jury-rigging to show only top-level comments. So the RSS URL for the last thread looked like this: https://old.reddit.com/r/TheMotte/comments/wulqxp.rss?depth=1
I tried adding the culture war thread from here into Feedly but it doesn't seem to recognize the format, and instead prompts me to use a paywalled feature to build custom RSS feeds. Can the rdrama code base support RSS?
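For the curious, the jury-rigging above amounts to appending `.rss?depth=1` to a thread's path. Here's a minimal sketch of a helper that builds such a URL; the function name and structure are my own invention, not anything Reddit provides:

```python
# Hypothetical helper (name is mine): build an old-Reddit RSS feed URL
# restricted to top-level comments via the depth=1 query parameter.
def thread_rss_url(thread_id: str, subreddit: str = "TheMotte") -> str:
    """Return an RSS feed URL for a thread, limited to top-level comments."""
    return f"https://old.reddit.com/r/{subreddit}/comments/{thread_id}.rss?depth=1"

# The culture war thread mentioned above:
print(thread_rss_url("wulqxp"))
# → https://old.reddit.com/r/TheMotte/comments/wulqxp.rss?depth=1
```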