Culture War Roundup for the week of February 20, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Over the last few months, I've followed someone named Alexander Kruel on Substack. Every single day, he writes a post about 10 important things that happened that day - typically AI breakthroughs, but also his other pet concerns, including math, anti-wokeness, nuclear power, and the war in Ukraine. It's pretty amazing that he is able to unfailingly produce this content every day, and I'm in awe of his productivity.

Unfortunately, since I get this e-mail every morning, my information diet is becoming very dark.

The advances in AI in the last year have been staggering. Furthermore, it seems that there is almost no one pumping the brakes. We seem doomed to an AI arms race, with corporations and states pursuing AI with no limits.

In today's email, Kruel quotes Eliezer Yudkowsky, who says:

I've already done my crying, late at night in 2015…I think that we are hearing the last winds start to blow…I have no winning strategy

Eliezer is ahead of the curve. Where Eliezer was in 2015, I am now. AI will destroy the world we know. Nate Soares, director of MIRI, is similarly apocalyptic.

We've given up hope, but not the fight

What comes after Artificial General Intelligence? There are many predictions. But I expect things to develop in ways that no one expects. It truly will be a singularity, with very few trends continuing unaltered. I feel like a piece of plankton, caught in the swells of a giant sea. The choices and decisions I make today will likely have very little impact on what my life looks like in 20 years. Everything will be different then.

So, party until the lights go out? How do I deal with my AI-driven existential crisis?

I'm a doctor, relatively freshly graduated and a citizen of India.

Back when I was entering med school, I was already intimately aware of AI X-risk from following LW and Scott, but at the time, the timelines didn't appear so distressingly short - nothing like Metaculus predicting a mean time to human-level AGI of 2035, as it was the last time I checked.

I expected that to become a concern in the 2040s and 50s, and as such I was more concerned with automation induced unemployment, which I did (and still do) expect to be a serious concern for even highly skilled professionals by the 30s.

As such, I was happy at the time to have picked a profession that would be towards the end of the list for being automated away, or at least the last one I had aptitude for; I don't think I'd make a good ML researcher, for example, likely the final field to be eaten alive by its own creations. A concrete example even within medicine would be avoiding imaging-based fields like radiology, as well as practical ones like surgery, as ML vision and soft-body robotics leap ahead. In contrast, fields where human contact is craved and held in high esteem (perhaps irrationally), like psychiatry, are safer bets, or at least the least bad choice. Regulatory inertia is my best, and likely only, friend: assuming institutions similar to those of today (justified by the short horizon), it might be several years before an autonomous surgical robot is demonstrably superior to the median surgeon, it's legal for hospitals to use one, and the public cottons on to the fact that it's a superior product.

I had expected to have enough time to establish myself as a consultant, and to have saved enough money to insulate myself from the concerns of a world where UBI isn't actually rolled out, while emigrating to a First World country that could actually afford UBI - becoming a citizen within the window of time where the host country is willing to naturalize me and thus accept a degree of obligation to keep me alive and fed. The latter is a serious concern in India, volatile as it already is, and while I might be well-off by local standards, unless you're a multimillionaire in USD, you can't use investor backdoors to flee to countries like Australia and Singapore, and unless you're a billionaire, you can't insulate yourself in the middle of a nation that is rapidly melting down as its only real advantage, cheap and cheerful labor, is completely devalued.

You either have the money (like the West) to buy the fruits of automation and then build the factories for it, or you have the factories (like China) which will be automated first and then can be taxed as needed. India, and much of South Asia and Africa, have neither.

Right now, it looks to me that the period of severe unemployment will be both soon and short, unlikely to last more than a few years before capable near-human AGI reaches parity and then superhuman status. I don't expect an outright FOOM of days or weeks, but a relatively rapid change on the order of years nonetheless.

That makes my existing savings likely sufficient for weathering the storm, and I seek to emigrate very soon. Ideally, I'll be a citizen of the country of my choice within 7 years, which is already pushing it, but then it'll be significantly easier for me to evacuate my family should it become necessary by giving them a place to move to, if they're willing and able to liquidate their assets in time.

But at the end of the day, my approach is aimed at the timeline (which I still consider less likely than not) of a delayed AGI rollout with a protracted period of widespread Humans Need Not Apply in place.

Why?

Because in the case of a rapid takeoff, I have no expectations of contributing meaningfully to Alignment - I don't have the maths skills for it, and even my initial plans of donating have been obviated by the billions now pouring into EA and adjacent Alignment research, be it in the labs of the giants or more grassroots concerns like EleutherAI. I'm mostly helpless in that regard, but I still try to spread the word in rat-adjacent circles when I can, because I think convincing arguments are worth far more than my measly Third World salary. My competitive advantage is in spreading awareness and dispelling misconceptions among the people who have the money and talent to do something about it, and while that would be akin to teaching my grandma to suck eggs on LessWrong, there are still plenty of forums where I can call myself better informed than 99% of the otherwise smart and capable denizens, even if that's a low bar to beat.

However, at the end of the day, I'm hedging against a world where it doesn't happen, because the arrival of AGI is either going to fix everything or kill us all, as far as I'm concerned. You can't hide, and if you run, you'll just die tired, as Martian colonies have an asteroid dropped on them, and whatever pathetic escape craft we make in the next 20 years get swatted before they reach the orbit of Saturn.

If things surprisingly go slower than expected, I hope to make enough money to FIRE and live off dividends, while also aggressively seeking every comparative advantage I can get, such as being an early-ish adopter of BCI tech (i.e. not going for the first Neuralink rollout but the one after, when the major bugs have been dealt with), so that I can at least survive the heightened competition with other humans.

I do wish I had more time, as I genuinely expect to more likely be dead by my 40s than not, but that's balanced out by the wonders that await should things go according to plan, and I don't think that, if given the choice, I would have chosen to be alive at any other time in history. I fully intend to marry and have kids, even if I must come to terms with the fact that they'll likely not make it past childhood. After all, if I had been killed by a falling turtle at the ripe old age of 5, I'd still rather have lived than not, and unless living standards are visibly deteriorating with no hope in sight, I think my child will have a life worth living, however short.

Also, I expect the end to be quick and largely painless. An unaligned AGI is unlikely to derive any value from torturing us, and would most likely dispatch us dispassionately and efficiently, probably before we can process what's actually happening. And even if that's not the case, and I have to witness the biosphere being rapidly dismantled for parts, or if things really go to hell and the only other prospect is starving to death, then I trust that I have the skills and conviction to manufacture a cleaner end for myself and the ones I've failed.

Even if it was originally intended as a curse, "may you live in interesting times" is still a boon as far as I'm concerned.

TL;DR: Shortened planning windows, conservative financial decisions, reduction in personal volatility by leaving the regions of the planet that will be first to go FUBAR, not aiming for the kinds of specialization programs that will take greater than 10 years to complete, and overall conserving my energy for scenarios in which we don't all horribly die regardless of my best contributions.

Martian colonies have an asteroid dropped on them, and whatever pathetic escape craft we make in the next 20 years get swatted before they reach the orbit of Saturn.

In 20 years the AGI apocalypse will not be nearly as romantic as that. It is much more likely to look like a random bank/hospital sending you a collections notice for a home loan/medical treatment you definitely didn't agree to, bringing you to court over it, and putting you up against the equivalent of a $100M legal team. The AI-controlled Conglomerate wins in court and you spend the rest of your life subsistence farming as a side gig while all your official income is redirected to the AI Conglomerate.

For extra fun, if you are married, social media and increasing economic struggle poison your relationship with your spouse and both of you apply for the services of AI Legal. The hotshot AI Legal representatives fight acrimoniously, revealing every dark secret of both you and your spouse, and successfully breaking apart your marriage in divorce settlement. Honestly, you don't remember why you ever loved your ex-spouse, or why your children ever loved you, and you totally understand your real-world friends distancing themselves from the fiasco. Besides, you don't have time for that anymore. Half your salary is interest on the payment plan for AI Legal.

As a smart and independently wealthy researcher, you look into training your own competing, perhaps open-source AI model to fight back against the Machine, but AI Conglomerate has monopolized access to compute at every level of the supply chain, from high-purity silicon to cloud computing services. In despair, you turn to the old web and your old haunt The Motte, where you find solace in culture war interspersed with the occasional similar story of despair. Little do you know that every single post is authored by AI Conglomerate to manipulate your emotions from despair into a more productive anger. Two months later, you sign up to work for a fully-owned subsidiary of AI Conglomerate and continue working to pay off your debts, all while maximizing "shareholder" output.

That sounds like a far more subtle alignment failure than I consider plausible, though I'm not ruling it out.

A superhuman AGI has about as little reason to subvert and dominate baseline humans as we have to build an industry off enslaving ants and monkeys. I'm far more useful for the atoms in my body, which can be repurposed for better-optimized things, than for my cognitive or physical output.

I'd go so far as to say that what you propose is the outcome of being 99% aligned, since humans are allowed to live and more or less do human things, as opposed to being disassembled for spare parts for a Von Neumann swarm.

My goal in writing these stories was to capture how AI set up to maximize profit could fuck over the little guy by optimizing existing business processes. I think that's more likely than anything else.