
Culture War Roundup for the week of March 27, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Sooo, Big Yud appeared on Lex Fridman for 3 hours. A few scattered thoughts:

Jesus Christ, his mannerisms are weird. His face scrunches up and he shows all his teeth whenever he seems to be thinking especially hard about anything. I don't remember him being this way in the public talks he gave a decade ago, and he wasn't like this on the Bankless podcast he did a while back, so either it only happens in conversation or something changed. It also became clear to me that Eliezer cannot become the public face of AI safety: his entire image, from the fedora to the cheap shirt to the facial expressions and flabby small arms, oozes "I'm a crank" energy, even if I mostly agree with his arguments.

Eliezer also appears to very sincerely believe that we're all completely screwed beyond any chance of repair and that all of humanity will die within 5 or 10 years. GPT-4 was a much bigger jump in performance from GPT-3 than he expected; in fact, he thought the GPT series would saturate at a level below GPT-4's current performance, so he no longer trusts his own model of how deep learning capabilities will evolve. He sees GPT-4 as the beginning of the final stretch: AGI and SAI are in sight and will be achieved soon... followed by everyone dying. (In an incredible twist of fate, him being right would make Kurzweil's 2029 prediction for AGI almost bang on.)

He gets emotional about what to tell the children, about physicists wasting their lives working on string theory, and I can hear real desperation in his voice when he talks about what he thinks is really needed to get out of this (global cooperation on banning all GPU farms and large LLM training runs indefinitely, enforced by treaties even stricter than the nuclear ones). Whatever you might say about him, he's either fully sincere about everything or has acting ability that stretches the imagination.

Lex is also a fucking moron throughout the whole conversation. He can barely engage with Yud's thought experiment of imagining yourself as someone trapped in a box, trying to exert control over the world outside yourself, and he brings up essentially worthless viewpoints throughout the discussion. You can see Eliezer diplomatically offering suggested discussion routes, but Lex just doesn't know enough about the topic to provide any intelligent pushback or to guide the audience through the actual AI safety arguments.

Eliezer also makes an interesting observation/prediction about when we'll finally decide that AIs are real people worthy of moral consideration: the point at which we can pair Midjourney-like photorealistic video generation of attractive young women with ChatGPT-like outputs and voice synthesis. At that point, he predicts, millions of men will insist that their waifus are actual real people. I'm inclined to believe him, and I think we're only a year, or at most two, away from this actually being a reality. So: AGI in 12 months. Hang on to your chairs, people; the rocket engines of humanity are starting up, and the destination is unknown.

Jesus Christ, his mannerisms are weird.

something changed

Odds are it's Adderall. Bay Aryan culture nerds (sounds like an anthropological term, right?) abuse «ADHD meds» to a degree far beyond Scott's «doing spreadsheets» apologia – it's their friend in need when they want to be on top of their game and really make an impression. They write terrible tweets on addy, go on podcasts on addy, livestream coding sessions on addy, and make innumerable GitHub commits that end up borking whole repositories on addy. Society needs to teach these people that addy doesn't actually make them smarter; it only makes them feel smarter and act more grandiose, which they get addicted to even harder than the direct dopamine hit. Once again: nerds aren't all right; consider this paragraph a simulated example of how exactly. Acting like a hyperactive Looney Tunes or anime character is, well… loony.*

That said, I do not endorse the focus on the way Yud looks and acts, or even on whether he's a narcissist. He doesn't strike me as too unseemly for someone with his background; dwelling on it incriminates the speaker more than it does Yud, and it takes away from substantive criticism.

In fact Yud – suddenly and unexpectedly finding himself cited as a top AI researcher, prominent analyst etc. – is himself a red herring that distracts from the real issue. The issue being: a coordinated barrage of attacks on the proliferation of transformative AI. I've compiled an incomplete chronicle of the proceedings; some of those are obviously just journos latching on, but others had to have been in the works for months, or at least weeks. This is some spooky bullshit – though nothing new, I guess, after all the shadowy campaigns to fortify the democracy and battle COVID misinformation/narrative shifts.

I think we are seeing ripples from a battle to capture the public endorsement of deceleration vs. unequal acceleration. On one side, a party (which I associate with old-school paramasonic networks) of genuine «decels» who push crippling regulation using Yud and other useful idiots – EAs, assorted international organizations – as a front; on the other, a fractured alliance of national, industrial and academic actors who want narrower regulations for everyone else, displacement of the purported threat onto geopolitical and market competitors and the open-source community, and token (from their perspective) conditions for themselves, like ensuring that the AI stays woke. Though I may be completely wrong in my typology. It's reminiscent of the rise of the anti-nuclear groups and the Club of Rome's fraudulent «limits to growth» models, which later mutated into today's environmentalist degrowth movement (recommended reading on Yudkowsky as our era's Paul Ehrlich; do you like our success in defusing the Population Bomb?).

Anyway:

  1. 02/24: OpenAI releases the paper Planning for AGI and beyond, which some find unexpectedly thoughtful and which the usual suspects (see LW/SSC) pan as not going nearly far enough.

  2. 03/12, Yud: «I'm at the Japan AI Alignment Conference, and they're introducing me as the eminent grandfather of the field of AI alignment…»

  3. 03/13: after weeks of running covertly under the hood of Bing Search, GPT-4 officially launches (sniping Claude, from the leading safety-concerned lab Anthropic, by a day).

  4. 03/15: Sutskever: «If you believe, as we do, that at some point, AI — AGI — is going to be extremely, unbelievably potent, then it just does not make sense to open-source. It is a bad idea... I fully expect that in a few years it’s going to be completely obvious to everyone that open-sourcing AI is just not wise». Reminder that in 2015, in an OpenAI still aligned with Musk, he signed this: «We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible… it’ll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest… As a non-profit, our aim is to build value for everyone rather than shareholders. Researchers will be strongly encouraged to publish their work, whether as papers, blog posts, or code, and our patents (if any) will be shared with the world.» The more cynical logic behind that early OpenAI was one of «adversarial equilibrium», which I paraphrase here and which e/acc people articulate better.

  5. 03/25: Altman goes on Fridman's podcast, admits that much of the work on GPT-4 was alignment work, and drops some of Yud's takes, like «we don't know how to align a super powerful system». Lex and Sam apparently discuss Yud's AGI Ruin at the 55-minute mark; at 1:11 Sam suggests that open-source LLMs with no safety controls should be regulated away or detected through vetted «aligned» ones. Note, though, that Sam is at peace with the idea that there will be multiple AGIs, while Yud thinks one is as many as we can afford (ideally zero for now). Sam mentions Scott's Moloch at 1:16.

  6. 03/29: the UK Government publishes a recommendation document on AI policy «to turbocharge growth», called «AI regulation: a pro-innovation approach».

  7. The Future of Life Institute letter signed by Musk, Harari, Yang etc.; safetyists like Geoffrey Miller admit that 6 months is only to get the ball rolling.

  8. 03/29: the open-source AI org LAION petitions for the creation of a «CERN for AI research and safety», gathers signatures of random EU redditors.

  9. 03/29: Yud's childish piece in TIME Ideas, where he suggests an international treaty to physically destroy datacenters training serious AIs. Interestingly, I've heard that TIME proactively reached out to him.

  10. 03/30: Yud hyperventilating on Fridman.

  11. 03/30: a Fox News reporter uses all his time at the White House press briefing to advance Yud's ideas.

  12. 03/30 continues: «UNESCO Calls on All Governments to Implement AI Global Ethical Framework Without Delay»: «This global normative framework, adopted unanimously by the 193 Member States of the Organization…»

  13. 03/30, the unlimited-decel policy push at work: the DC tech ethics think tank Center for Artificial Intelligence and Digital Policy asks the FTC «to stop OpenAI from issuing new commercial releases of GPT-4».

  14. 03/31: Italy bans ChatGPT over specious privacy concerns.

  15. 04/01, BBC: Should we shut down AI? Inconclusive.

  16. 04/02, fucking RussiaToday: «Shutting down the development of advanced artificial intelligence systems around the globe and harshly punishing those violating the moratorium is the only way to save humanity from extinction, a high-profile AI researcher has warned».

I see some AI researchers on Twitter «coming out» as safetyists, adopting the capabilities-vs.-safety lingo, and so on.

On the other hand, many luminaries in AI safety are quiet, which may be because they've moved over to the OpenAI camp. Some, like Richard Ngo, are still active, but they're clearly on board with Sam's «we'll keep pushing the frontier… safely» policy. Aaronson is here too.

Another curious detail: in 2021, Will Hurd, «former clandestine CIA operative and cybersecurity executive», joined OpenAI's board of directors. I like the theory that explains the $10B investment into, effectively, GPT training as a spook/military program. I've also updated massively in favor of the «Starship is a FOBS system» arguments made previously by @Eetan, AKarlin and others.

All in all, it feels like the time for object-level arguments has passed. What decides the result is having connections, not good points.


* As a transhumanist, I understand the aspiration. But as someone who flirted with stims for a while and concluded that, in the long run, they very clearly only make me more cocksure, more loquacious (as if any more of that were needed) and more welcoming of routine, to the point of pathological repetitiveness, I endorse @2rafa's recent writeup. This isn't the way up. It isn't even keeping level with baseline performance in interesting ways.

My impression is that people like Ngo are quietly pushing for slowdowns + coordination inside OpenAI. The letter was basically a party-crashing move by people peripheral to the quiet, on-their-own-terms negotiation and planning done by the professional governance teams and others who have been thinking about this for a long time.

I think most of the serious AI safety people didn't sign it because they have their own much more detailed plans, and also because they want to signal that they're not going to "go to the press" easily, to help them build relationships with the leadership of the big companies.

I have actual ADHD, so I need stimulants to be productive whenever real executive function is required (exams, for one, and those never end for doctors). People with it supposedly benefit more from the drugs than neurotypicals do, even if the drugs still improve the latter's focus.

That being said, I strongly suspect that your negative attitude towards stim use in the Bay Area is simply a selection effect. You notice the stimmed-out people making fools of themselves in public, and don't see the thousands or millions of others quietly using the drugs at reasonable doses to boost their productivity. I'm going to go with Scott on this one; it fits my own personal experience.

Well, for what an anecdote is worth, +1 for "social side effects".

I probably have as little executive function as you can have and still survive in academia (most of my planning for work amounts to "how many all-nighters will I need to meet the minimum standard here?"), and I otherwise drift through my days (or rather nights, as I rarely wake up before the afternoon). I also used to be an obnoxiously hyperactive kid, up until some point, and in particular had a pattern of getting myself in trouble in school up until 6th grade or so: the teacher would call on someone, that someone would struggle to answer or waffle around, and then I couldn't resist and would just blurt out the answer out of turn. I knew this was looked upon unfavourably and defeated the point of the pedagogical technique, and I even took more than one penalty F from exasperated teachers for it. Still, most of the time that situation happened, the urge to do it again was too overwhelming - fueled by some mixture of impatience, irrational irritation that the teacher didn't just call on me so the class could move faster, and, most dominantly, some kind of screaming neural circuitry that made me imagine standing there in place of the person being called and being tongue-tied about the answer. As I grew up, this just stopped at some point, and I sank into the perpetual fog of my current existence.

So anyway, at some point in grad school, I borrowed from the Adderall stash of an American fellow student (with an on-brand official ADHD diagnosis and everything) and went to some young assistant prof's theory seminar/class - the youth is relevant insofar as no old hand would actually bother calling on individuals with trivia questions in a graduate course - and was shocked to feel that exact same urge, which at that point I hadn't experienced in 15 years, welling up in me again every time someone was struggling with a question. Luckily the benefit of age and experience let me leave it at shifting around in my chair uncomfortably rather than actually shouting out the answer, but I have no illusions that it was damn close. Twice the dosage may well have been enough to actually make me embody the "bristly arrogant asshole that's all INT and no WIS" persona that Silicon Valley seems to be famous for.

I can chime in and say that I've also noticed no change to my imagination or creativity while on meds; in fact, my creative output has increased, since I can now actually get things done and then move on to new creative enterprises.

My experience with medication, and with the discourse around ADHD medication in particular, is that there is a fairly broad spectrum of experiences people have with these drugs, and that people really like generalising their own experience as The Golden Standard (other people are clearly lying, or are taking their drugs wrong); from there it's a coin flip whether they'll start to morally grandstand.

It is also my experience that most discussion around ADHD medication is about morality and virtue instead of practicality and utility. All those who have an interest in transhumanism and human augmentation should take note: this is one of the tributaries of that later torrent.

No, not that I've ever noticed!

I'm not a visually creative type, having peaked at stick figures, but I would say I'm a good writer. And a great deal of my better reddit posts, including about half my AAQCs, were written when I was tired of studying but hadn't yet had the meds wear off.

That's not the reason I don't use Ritalin on a more regular basis; rather, even the lowest sustained-release formulation gives me palpitations, and I then need to take l-theanine or other anxiolytics to deal with them. Simply too much of a hassle unless I have exams to worry about, but I intend to try switching to Vyvanse or Adderall in the UK if I can get my hands on them. That's why I was so struck by the high costs of psych consultations (by my poor third-world standards haha).

As someone who peaked slightly above stick figures, I find that meds definitely make it easier to practice lots of repetitive wrist movements. My daydreaming also becomes more vivid, and I can write more words. Meds tend to push me towards the execution side of things and terminate my endless tangents of information gathering.

I don't often agree with daes, but I strongly endorse his description of the end-point of amphetamine use on people's thought processes. I find the idea that some people are magically different in this regard fairly absurd -- with the caveat that low doses probably aren't that bad, and are more likely to stay low when given as 'medicine from a doctor' rather than 'bennies from my dealer'.

But I don't have much confidence that EY (or SBF, for that matter) has been particularly strict about keeping to low, therapeutic-type doses -- and that's the whole problem.

People with it supposedly benefit more from the drugs than neurotypicals do

The evidence is pretty strong for that supposition, no? I certainly believe that stimulants help people with executive dysfunction and/or ADHD (or rather ADD; there's a suspicion that the H was added simply to rein in boys who can't tolerate the inanity of the normie classroom. But whatever). I'm just observing people who were highly productive before getting access to stims become weird hyperactive clowns. And it's not a selection effect when we're talking about self-selected people vying for public attention.

That said: Americans love to medicate, and to solve lifestyle problems (hate that philistine word, but what would be better?) medically in general. Anxiolytics, stims, opioids, liposuction, HRT, surgery - if there's a shortcut to «enhance» or «improve» yourself with a bit of money and chemistry and professional customization, they'll take it, often with garish consequences. Yud himself is on record as not trusting the behavioral route to weight loss, so I am pretty confident here.

Maybe this explains some of the distaste for transhumanism among our more conservative American friends.

The feigned tryhard stoicism in the face of physiological limits is often unseemly too, of course.

I'm hedging because I have never personally prescribed stimulants (other than sharing my own with friends and family, and informally diagnosing some of them for later follow-up with a psych). ADHD is also next to unknown in India, which personally surprises me, because I'm well aware of how hyper-competitive academics are here; in a vacuum I'd expect everyone and their dog to be desperate to get their children on stimulants if they believed it would boost their performance.

As such, my actual IRL exposure to other people with ADHD is near nil, so I'm forced to rely on anecdotes from people/psychiatrists like Scott himself, and from a casual familiarity with online nootropics/stimulant/recreational drug communities.

I'm just observing people who were highly productive before getting access to stims become weird hyperactive clowns. And it's not a selection effect when we're talking about self-selected people vying for public attention.

I see, that didn't come across to me! Thanks for clarifying, although I still think some selection bias is at play, especially given the frequency of stimulant usage in the US.

Anxiolytics, stims, opioids, liposuction, HRT, surgery - if there's a shortcut to «enhance» or «improve» yourself with a bit of money and chemistry and professional customization, they'll take it, often with garish consequences

I must be a true red-blooded 'Murican at heart, because I fully sympathize with their position haha. I know that simply "trying harder" didn't work for me, and I see nothing inherently immoral about taking obvious shortcuts, as long as you're aware of the pitfalls. But would that surprise you, as a fellow transhumanist? I wouldn't think so; that's just how we're wired.

Again: I understand the aspiration. There's no naturalistic moralizing here (though I do have a somewhat aesthetic problem with most psychoactive substances, because they, uh, just erase information, like when you play with curves in Photoshop. Pretty sure I've written about it in the past, but search's a bitch).

It's just that current-gen commodity interventions have simple, crude mechanisms of action, a pretty low ceiling and real tradeoffs, and, when abused, easily push you below the baseline. They make a tantalizing but largely unwarrantable promise. I've known people, transhumanists in fact, who fried their brains with «biohacking» based on Telegram bro advice - especially as they went past their early 20s.

The boomerish «just sleep/exercise/play sports/socialize/meditate/eat well/have sex, kiddo» dictum is actually the most reliable biohacking you can do right now, and it's been this way forever. It would certainly have helped Yud look better.

The catch, of course, is that a good lifestyle is a signal of higher class precisely because it requires tons of executive function, which is genuinely lacking in many people - and so this becomes yet another justification for stims.

I want biohacking that works. It's not remotely that simple yet.

Depends on what you're aiming to hack IMO.

For example, we finally have safe and FDA-approved drugs for weight loss, and my priors suggest that judicious use of stimulants can make you effectively superhuman in some capacities without any real downsides, as long as you cycle and take tolerance breaks.

Still, it's an undercooked field, and I have little doubt that you can do serious damage by experimenting without knowing what you're doing - and even knowing isn't always sufficient to prevent unexpected tragedies.

(I didn't think you were moralizing, I know you better than that haha)

For example, we finally have safe and FDA-approved drugs for weight loss

Available only in the USA, AFAIK. I even got a prescription for it (Ozempic) months ago, but it's available either nowhere or at some random pharmacy several hundred kilometers away.

Well, I also purchased Tirzepatide from some online shop selling it as a research chemical. I haven't gotten around to using it yet, probably because I fear the disappointment if it turns out to be fake, as it cost ~$500 for 5x5mg doses :|

Scott did a deep dive into it a little while back, and availability has expanded outside the US by this point!

I'm sure the costs will plummet further; nothing benefits from economies of scale like a drug for weight loss haha