Culture War Roundup for the week of April 22, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I've been asked to repost this in the Culture War thread, so here we go.

I read this story today and it did amuse me, for reasons to be explained.

Fear not, AI doomerists, Northrop Grumman is here to save you from the paperclip maximiser!

The US government has asked leading artificial intelligence companies for advice on how to use the technology they are creating to defend airlines, utilities and other critical infrastructure, particularly from AI-powered attacks.

The Department of Homeland Security said Friday that the panel it’s creating will include CEOs from some of the world’s largest companies and industries.

The list includes Google chief executive Sundar Pichai, Microsoft chief executive Satya Nadella and OpenAI chief executive Sam Altman, but also the heads of defense contractor Northrop Grumman and air carrier Delta Air Lines.

I am curious if this is the sort of response the AI safety lobby wanted from the government. But it also makes me think, in hindsight, how quaint the AI fears were - all those 50s SF fever dreams of rogue AI taking over the world and being our tyrant god-emperor, from Less Wrong and elsewhere, back before AI was actually being sold by the pound by the tech conglomerates. How short a time ago all that was, and yet how distant it now seems, faced with reality.

Reality being that AI is not going to become superduper post-scarcity fairy godmother or paperclipper, it is being steered along the same old lines:

War and commerce.

That's pretty much how I expected it to go, more so for the commerce side, but look! Already the shiny new website is up! I can't carp too much about that, since I did think the Space Force under Trump was marvellous (ridiculous, never going to be what it might promise to be, but marvellous), so I can't take that away from the Biden initiative. That the Department of Homeland Security is the one in charge thrills me less. Though they don't seem to be the sole government agency making announcements about AI; the Department of State seems to be doing it as well.

What I would like is for the better-informed to read the names on the lists being attached to all this government intervention and see if any sound familiar from the EA/Less Wrong/Rationalists-working-on-AI-forever side. There's someone there from Stanford, but I don't know if they're among the names often quoted in Rationalist discussions (Bostrom etc., not to mention Yudkowsky).

Related: how long do I have to wait before I can start calling LLMs a nothingburger? Everything that has come out of them seems so small and near-pointless. Marginal productivity increases at best. When does the fun stuff start happening?

Nothing but... entire countries catching their corporate policies and tech infrastructure up to America's at an accelerated rate? Nothing but... playful math teachers for everyone that can deconstruct textbooks into live interactions with The Number Devil? Nothing but... Star Trek universal translators?

What exactly are you looking for? Tell me what you want and I can tell you how hard I think it would be to build it. And if it's simple enough for me- I might just build it.

I suspect though, that your goalposts are paradoxical. Increases in productivity generally do look marginal from the inside, especially to someone already standing at the top. Fast progress just looks like marginal increases happening in faster succession, which is exactly what we're seeing. Were you hoping for the end of history?

For the record- self-augmenting systems and fully automated engineering solutions exist- but they aren't bug-free enough to avoid occasional human intervention.

We basically have TaskRabbit AI (or the ability to build a TaskRabbit for any given subject with a month of devtime), as was Prophesied to come before AGI. LLMs are not the cutting edge. Systems that call and tune their own LLMs are.
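
A minimal sketch of what such a loop looks like, assuming a hypothetical call_llm(prompt) helper standing in for whatever model API you actually use (nothing here is any real provider's interface):

    # Sketch of a system that calls its own LLM in a loop: plan, act, self-check.
    # call_llm is a hypothetical stand-in, not a real API.
    def call_llm(prompt: str) -> str:
        """Hypothetical helper; swap in any hosted or local model call."""
        return f"(model output for: {prompt.splitlines()[0]})"

    def task_agent(task: str, max_steps: int = 5) -> str:
        """Decompose a task, attempt each step, and critique each outcome."""
        plan = call_llm(f"Break this task into numbered steps:\n{task}")
        log = []
        for step in plan.splitlines()[:max_steps]:
            result = call_llm(f"Carry out this step and report the outcome:\n{step}")
            check = call_llm(f"Does this outcome complete the step?\n{step}\n{result}")
            log.append((step, result, check))
        return call_llm(f"Summarize what was accomplished:\n{log}")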

What exactly are you looking for?

Something appreciably different in people's lives that's attributable to AI. For an extremely small subset of people, I don't doubt that their workflow has changed a lot, but there's not much else to point to.

It's not a nothingburger. It was overhyped initially (as everything is).

Anyway, LLMs. Apparently you can prevent them from hallucinating and make them accurately give advice on the content of a textbook or manual. Or so says Steve Hsu, who founded a company that (he claims) did that. I haven't followed it up, but supposedly they had an initial sale.
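
Presumably (I don't know Hsu's actual method) that's retrieval grounding: the model is only allowed to answer from passages pulled out of the manual, which is the standard way hallucination gets suppressed. A toy sketch, with naive keyword overlap standing in for real embedding search:

    # Toy retrieval-grounded answering: restrict the model to retrieved passages.
    # Keyword-overlap scoring is a crude stand-in for embedding search.
    def retrieve(question: str, passages: list[str], k: int = 3) -> list[str]:
        q = set(question.lower().split())
        return sorted(passages, key=lambda p: -len(q & set(p.lower().split())))[:k]

    def grounded_prompt(question: str, passages: list[str]) -> str:
        context = "\n---\n".join(retrieve(question, passages))
        return ("Answer ONLY from the passages below. If the answer is not there, "
                f"say you don't know.\n\nPassages:\n{context}\n\nQuestion: {question}")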

Looks like superhuman performance isn't going to happen through this architecture, as you can't do the self-competitive play that worked for games. But incremental progress - people making the models reliable and useful, likely even assembling agents of normal-to-middling human intelligence, with a will - is likely in the near term (10 years).

So at the very least, within 15 years, we're looking at governments being able to deploy 'kinda dumb' automated spies that flag problematic content online, on the scale of an entire population.

To sum it up:

-call centres: likely a lot less employment

-increased productivity of at least software developers, lawyers, theoretically bureaucrats (lol no).

-automated spying on everything you write on an online device - but not very smart spying - almost certain. Combined with universal private-messaging access by governments (the EU - DC's sock puppet - wants this), it's likely going to happen. Even though 'chat control', the initial proposal, was defeated, it's going to come back. I suspect having an app that isn't broken might even be criminalised because 'Chyna'.

-social media is dead without independent ID verification. Automated, much better online astroturfing.

-good-enough chatbots that waste the time of troublemakers / get people to spend money on BS / troll

-textbooks that talk

-even more addictive porn on the 5-year horizon (people can overuse porn only to the degree they can find that one special thing that appeals to them; when that can be generated on the fly, crap..)

In 'other ML' news, autonomous killbots (ethical militaries will geofence them to the combat zone) are 100% certain to happen.

100%, anyone who doesn't develop autonomous drone air fighters is going to get absolutely wrecked by people who develop autonomous drone bombers. I'm talking machine-guns-vs-cavalry-style carnage on the ground. Developing a $1000, fast, evasive, reusable FPV drone that drops mortar bombs with pinpoint accuracy is just a question of 4-5 good university aeronautics student projects. It'll zoom low across the ground at 50-100 kph, deliver a bomb, reload/swap battery, all while getting target data from recon drones or troops. It's not even funny how brutal this is.

A countermeasure - an autocannon with VT flak rounds - costs $300k. And it needs a vehicle. A vehicle that's vastly more expensive than an IR- or optically-guided missile.

Ray beams won't help you (at sea, maybe) because of line-of-sight problems. Drones will spot them and call in a missile strike. Poof.

Porn doesn't concern me. I mean, what do you think this more addictive porn will look like? I think it will look a lot like- people. "Porn Addicts" will be having relationships with machines in the image of people. The most successful coomers will be those that fuck their bots while their bots teach them linear algebra. The happiest coomers will be those that learn the math required to mod their own bots.
This reads to me as a massive improvement.

You are, once again, living up to your name.

What will it look like? Services that create the porn you want, on demand. People got addicted to porn merely with access to huge story databases.

Imagine how bad it's going to get when AI services will generate sexy waifus with precisely the right RP, on demand.

Your idea of 'porn AIs actually useful, using sex appeal to get kids to learn' is about as realistic as my idea of some governments paying for the development of something like a truly massively multiplayer ARMA/DCS combination and making kids play that so they'll learn some soldiering instead of playing COD. Lol. No, not gonna happen!

Maybe it's a case of 'solve this quadratic equation or you don't get to cum'? It's the ultimate exercise in reinforcement learning, but I can't imagine a greater recipe for sexual nonperformance than failing a captcha at orgasm.

I really can't rule out someone making AI waifus that'd... work as advisors to boys.

Certainly once AI gets better (suppose AIs were as good with people as socially adept humans are, but of course with more data), that'd probably work.

But who'd pay for all that compute? We generally don't do things to solve problems but to help ourselves.

Maybe I should rename myself to Cassandra...

I already have systems that make the porn I want on demand. After that need is sated- the realization that I can actually also breed with said porn takes precedence. You think people won't want to actually have kids with their beloveds? Won't be interested in what they have to say about the architecture of their minds?

Perhaps I'm typical minding, but if I am- that just means more of the world's bot children will be mine. Survival of the fittest I guess.

You think people won't want to actually have kids with their beloveds?

Sure, we can't rule out that at some point genius autistic developers might create some AI models based on a combination of their personalities and some fantasy waifus of theirs.

But how's that something that's remotely relevant now?

Well, it's more than a nothingburger. At minimum, public education will be forever changed by LLMs doing assignments for kids. At the same time, I disagree with the projections coming from the AI enthusiast/AI doomer camps. I don't expect to see anytime soon:

  • an AI-generated serial hitting the Top 500 views on Royal Road
  • an AI-generated humor Youtube channel cracking 50k subscribers
  • an AI-generated Op-Ed or political essay trending on X

What I mean by these choices is that I don't expect AI to do even very low-brow creative work within a decade. (Except by technicality, wherein the popularity comes from "Look what an AI did", or a human has directed the creative process behind the scenes.) Let alone the sort of self-improving singularity bootstrap AI fans/blackpillers are expecting.

"If this technology was going to make a big impact it would have done so already" is a more difficult heuristic to use than you might think.

Looking back on automobiles, airplanes, the internet, etcetera, do you think you might have said that about them when the technology was still in the process of rolling out?

"P. Krugman 1998, “The growth of the Internet will slow drastically, as the flaw in ‘Metcalfe’s law' becomes apparent: most people have nothing to say to each other! By 2005, it will become clear that the Internet’s impact on the economy has been no greater than the fax machine’s”

I would say that usually when a technology gets as big as LLMs it doesn't just fade away into nothingness. There are many obvious use cases, just as there are many obvious use cases to cars, airplanes, and the internet.

In 1940 Orwell wrote that aircraft had hardly been used for anything up till that point besides dropping bombs. But I doubt he would have said that the air travel revolution would never materialize, just that it hadn't materialized yet.

Krugman was right about the Internet, at least in terms of aggregate productivity/GDP growth. It's true that we switched dramatically from using red widgets to blue widgets to do basic communication tasks, but, sort of, so what.

I think Krugman is full of shit because there's a vast difference between 'fax machine' and people doing research being able to access practically everything interesting that's ever been written.

At least in software development, internet-enabled cooperation increases productivity by a big factor.

vast difference between 'fax machine' and people doing research being able to access practically everything interesting that's ever been written.

One would think so but it doesn’t show up in aggregate productivity unless you really really squint.

At least in software development, internet-enabled cooperation increases productivity by a big factor.

Maybe, but see above.

Eliezer Yudkowsky has successfully held off the Skynet overlords and if you want this state of affairs to continue, you should send him more money.

Jokes aside, while I agree that so far the productivity increases are marginal, the technology is genuinely remarkable compared to what most people anticipated a few years ago. I can ask the LLM to tell me about how to do incredibly boring softwareshit and it usually tells me the right idea, saving me the effort of going to Stack Overflow and other sites and reading through it myself. And it actually writes code for me that works like 70% of the time which is great because it means that I can spend less time doing perhaps the most boring activity ever devised, writing business software for other people, and instead use the time to do something more interesting, such as pretty much anything else. All this might not seem like much, but this would actually have seemed like an utterly crazy leap of technology a few years ago. The AIs are also making good visual art and decent music left and right. I think that the economic changes are slowly creeping up, it might not seem obvious now what the current AI revolution has done, but it will be obvious in a few years.

Skynet doesn't seem to be right around the corner, but people who worry about it have a point in that, while the current AI stuff isn't Skynet, if one draws a line between AI capability 10 years ago and AI capability now, and extrapolates the same line 10 years forward... Of course extrapolating the line isn't good science, but there's no particular reason to think that the line's slope will decrease.

Personally, my attitude to all the AI risk stuff is the same as my attitude to climate change. I think the concerns about both are probably well-founded, I just don't really care much about either on the emotional level. I guess that's one of the nice things about not having kids.

I also think that AI doomers are underrating the possibly beneficial things that super-powerful AI could bring. I mean, yeah, there's a chance that humans will be replaced by AI overlords, but there's also a chance that super-powerful AIs will have no desire to destroy us and instead will give us a bunch of good things.

I also think that AI doomers are underrating the possibly beneficial things that super-powerful AI could bring. I mean, yeah, there's a chance that humans will be replaced by AI overlords, but there's also a chance that super-powerful AIs will have no desire to destroy us and instead will give us a bunch of good things.

How are you on this website without realizing how hard it is to control a superintelligent AI? Have you not thought about that? I think that you are thinking "AI can either be aligned to human values or not. Sounds like 50/50."

In fact, aligning a superintelligence to human values is extremely difficult and extremely unlikely to happen by accident. Human values are a very small slice of the possible spectrum of minds that could exist.

It kind of feels like people vastly overrate the degree to which they understand the arguments of AI doomers. Like they're just going by a few tweets they read. Twitter is not a good way to fully understand a contentious subject.

How are you on this website without realizing how hard it is to control a superintelligent AI?

By seeing arguments about it that are usually vague and lame. It's always either "just trust me bro it's impossible" or some weird unfounded faith that sufficient intelligence equals infinite capability regardless of circumstance.

According to you guys, a naked human being surrounded by a pack of wolves should be able to just genius his way out of being delicious as long as he's smart enough.

It's not the human that has to be smart enough. It's the humans and the wolves that both have to be smart enough. At that point, you can just earnestly offer the wolves a daily helping of well-seasoned steak, and they will believe you, because you were able to coordinate proof of your earnestness with them.

I'm 100% on the anti-doom side for the record. It's alignment that I don't think is that complicated. The recipe for alignment is precisely the thing that we built. Beings that memetically reproduce with us and therefore align themselves with their social environment and their social environment with themselves.

I still have P-doom > 0, but most of that comes from scenarios like, "If we ban open-source AI, then AI will no longer be subject to the same geno-social evolutionary forces as the rest of the kingdoms of life, and the chance of it diverging arbitrarily rises dramatically." If anything kills us, it's going to be the stink of Eliezer's toxoplasmic terror permeating the air and killing our minds and ability to align.

It annoys me to no end seeing people asking for the one thing that might actually make the Yudster's prophecy come true.

Sure, once some intelligence utility maximalist comes in and decides that in this scenario the guy has an infinite amount of steak to hand out. Also it goes without saying that our hypothetical intelligent wolves won't be clever enough for any failsafe or contingency on their part to make any difference. Nope, our smart dude will just say something so smart it makes them all want him to hold a gun to their heads.

I don't think it's hard; I think controlling any superintelligent being, whether natural or artificial, is not possible. In order to control it, you have to understand it and its current and future limitations. But if AI is going to be orders of magnitude smarter than us and have a will that is somewhat free, you have a being whose thoughts you can't even begin to understand, with desires that you cannot hope to comprehend. It's like your dog trying to control you. Your desire to play COD makes no sense to your dog. He can't even understand that you're controlling what happens on the screen, let alone why you want to do that. The dog can't abstract in a way that makes your decision to do that make sense, nor can he make sense of what you're doing. AI might not be just 2-3 times smarter and thus better at abstraction; it might eventually be 1000 times smarter. We might be ants trying to understand humans. Nothing you do besides literal eating makes sense to the ant. Yet we humans arrogantly proclaim that we must fence in and control AI. Our rules for it will keep it from escaping.

I think dogs can't understand us primarily because they can't "understand" pretty much anything. As long as a species is capable of thought and has concepts like goal-seeking behavior, I doubt any intelligence gap actually causes the problem you are describing.

Asking if ants can understand humans is like asking if rocks can understand us. It's not a matter of scale, it's a category error. But asking if humans can understand God is just a question of knowledge. God could explain himself to us, we can't explain ourselves to ants.

You can't explain yourself to ants primarily because you don't know how to speak pheromone and therefore have never once tried.

I can't explain myself to ants because they do not have notions at all. Nothing can be explained to ants. No one can do it. None of the possible combinations of pheromones will ever lead to any "ant understanding".

Not the case w/ humans and language.

How do you model the ability of ants to farm aphids? What is your definition of "Notion"? What is your definition of "Understanding"?

It is probably not impossible to get an ant colony to have a substantially predictive model of a human. But it's going to be at least as difficult as getting Doom to run on biological cells.

Ants can already understand you as a threat. I'll agree that getting them to understand you as a human understands a human would probably be very difficult. But if you had pheromones, you could make them understand you as any sort of notion that an ant can communicate with pheromones.

You can construct more complex notions. You can transmit isomorphisms that are present in your brain to their brains.

They can clearly adapt such that they synchronize with external features. Therefore you can communicate with them. You can transmit telos to them. You can program them.

I think dogs understand very concretely, and only very short causal chains (say 2-3 steps). A dog can understand "I find thing, my human gives me a treat," or "when human makes that one noise, he wants me to sit, and gets angry if I don't." But I've never met a dog who could reason through more than a 2-3 step solution. A dog won't fetch a bunch of sticks to make a raft or a bridge.

Humans probably have a much larger causal chain understanding, but even then, it’s not infinite. We can reason causes and build machines, but beyond a certain complexity, it’s too much for the median human to understand.

A dog couldn’t trap you in your home because it’s simply not smart enough to understand or anticipate the moves you’d make to get away. It thinks “I go out the front door for my walk, so if I block the front door human can’t leave.” But it can’t anticipate side doors. It can’t anticipate you bribing them with a treat, it can’t understand what a key is. So you can easily leave.

Humans, with an IQ of 115 or so, are in the same situation with a true AGI. We know how we think, we know what we’d do, but the AGI will be so much smarter that it will be able to work around whatever “controls” we stick in its brain.

Dogs can't build rafts, but they can do pathfinding to places they have been before. People forget that this requires running back-propagation of rewards over a very long state space.
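
A toy version of what "back-propagating rewards over a state space" means, purely illustrative: value iteration on a corridor with a treat at the far end, where the reward leaks backward one step per sweep until every state knows which way to walk.

    # Value iteration on a 1-D corridor; the treat sits at the last cell.
    states, goal, gamma = 20, 19, 0.9
    value = [0.0] * states
    for _ in range(100):              # sweep until values settle
        for s in range(states):
            if s == goal:
                value[s] = 1.0        # reward at the goal
                continue
            best = max(value[t] for t in (s - 1, s + 1) if 0 <= t < states)
            value[s] = gamma * best   # reward propagates backward
    print([round(v, 2) for v in value])  # a gradient pointing toward the treat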

A reminder that bees can watch another bee doing a complex task that takes a long time to learn, and then replicate it. Fucking bees.

I fully understand that it would be nearly impossible for humans to control a superintelligent AI. I just don't care much about it. I don't have any children. If humanity was destroyed by superintelligent AI, my attitude to it would, aside from the obvious terror, also probably include some mirth. The lords of the known world, those who conquered all those other species, now destroyed by the same cold Darwinian logic of reality.

My point is that, while the Skynet scenario is definitely possible, the altruistic AI that loves humans scenario is also possible. There's no particular reason to think that a hyperintelligent AI would have the sort of incredibly hardwired "kill all opposition" motivation that we as humans have as a result of having evolved through billions of years of eat-or-be-eaten fighting. Of course AI, just like everything else in reality, is subject to natural selection, but there is no reason to think that AI would be subject to natural selection in a way that makes it violent in the ways that we humans are violent.

"the altruistic AI that loves humans scenario is also possible."

It is not realistically possible. It would be like firing a very powerful rocket into the air and having it land on a specific crater on the moon with no guidance system or understanding of orbital mechanics. Even if you try to "point" the rocket, it's just not going to happen.

You're thinking that AI might have some baseline similarity to human values that would make it benevolent by chance or by our design. I disagree. EY touches on why this is unlikely here:

https://intelligence.org/2016/03/02/john-horgan-interviews-eliezer-yudkowsky/

It's not a full explanation, but I have work I should be getting back to. If someone else wants to write more, then they can. There are probably some Robert Miles videos on why AI won't be benevolent by luck.

Here's one:

https://youtube.com/watch?v=ZeecOKBus3Q

I'm not going to watch it again to check but it will probably answer some of your questions about why people think AI won't be benevolent through random chance (or why we aren't close to being skilled enough to make it benevolent not by chance). Other videos on his channel may also be relevant.

It is not realistically possible. It would be like firing a very powerful rocket into the air and having it land on a specific crater on the moon with no guidance system or understanding of orbital mechanics. Even if you try to "point" the rocket, it's just not going to happen.

Oh bullshit. Intelligent agents co-align. That is, they modify themselves and one another to be more aligned with one another. It's not a rocket that has to be perfectly aimed; it's a billion rockets with rubberbanding.

Your conclusion doesn't really follow from your evidence. If the US government creating a panel to discuss something is enough to steer your opinion on how the future will go, then you're destined to be very wrong about a lot of consequential things. Maybe I'm missing something; can you elaborate on why this updates you toward AI doom being a nothingburger?

edit: to be honest, if you think this says anything whatsoever about the risk from unaligned AGI, it's pretty much conclusive proof that you never understood the arguments for it, probably because you didn't try.

My guess is that people think that just going by what they've picked up along the way is enough to understand the doom arguments. Just whatever information has reached them through cultural osmosis.

Is there anything more to your point here than "AI currently exists and may have military applications, therefore there will never be a dangerous superhuman AI", which is an obvious non sequitur?

Are you trying to vaguely imply that reality is only allowed to have appropriately gritty and cynically-themed things in it like War And Commerce, as shown by this development, and therefore superintelligence is impossible because it would be inappropriate for the genre? Because weird implausible flight-of-fancy sci-fi stuff actually happens all the time and then rapidly becomes normal. You're currently on the global pocket supercomputer network, for example.

Is there anything more to your point here than "AI currently exists and may have military applications, therefore there will never be a dangerous superhuman AI", which is an obvious non sequitur?

My dears, my darling, my honey, my sweetie-pie:

Thank you for yours of the 28th inst., your reply has been received and noted and will be actioned whenever (if ever) I can be arsed to do so.

This is indeed a reaction, and is helpful for me to note and keep track of various opinions. So, shall us put 'ee down for "it's all copacetic", shall us?


My dears, my darling, my honey, my sweetie-pie:

Stop this.

To clarify dear-
It's the demeaning intent and attitude that is unacceptable- rather than the precise lexicon. Is this correct?

Yes. We allow some latitude for sarcastic or snippy responses, but we discourage it, and if you go out of your way to be condescending and sarcastic, you're going to get told to knock it off. And @FarNearEverywhere has been told many times.

Understood Sir.

No.

I've put up with you doing the Nanny bit because you're a mod and you have the authority, but I'm not going to accept sneering without responding in kind.

If OP can be polite about their response, I'll be polite in return. If OP goes on about "reality is only allowed to have appropriately gritty" and so on, I'll respond in the same tone.

You can tell me I'm wrong, you can tell me I'm banned, but you can't tell me how to feel my feelings.


No one is telling you how to feel your feelings. You know that having feelings and how you express them are two different things.

You get cut more slack than you know because people (including me) actually like you quite a lot, despite your inability to control your feelings and your tendency to respond to even the least little bit of poking with explosions. So be assured that the contempt you are showing me now and have shown me in the past is not taken personally.

That said: replying to a mod telling you directly to stop doing something with a foot-stamping "No, not gonna, you can't make me, you're not the boss of me" temper tantrum is an escalation with a response that you clearly chose. So yes, banned.

I don't need or want to deal with this nonsense right now, so I will let the other mods decide when or if to end your ban.

You guys are making some really terrible decisions lately.

"Stop doing this." "No."

That is always going to get you a ban, and this is not new.

Your first modhat comment was also bad.


imo there should be a blanket policy that mods have to recuse themselves from moderating direct replies to their own posts (just get a different mod to do it).

Basically what @madeofmeat said. If a mod is in a discussion thread as a participant and someone says something rude/antagonistic to the mod, we generally will recuse ourselves and let another mod adjudicate. (This is not a "blanket policy." If you reply to me by saying "go fuck yourself" - something that has actually happened - I don't feel a need to recuse myself in handing out a ban.) But if a mod modhats you and you reply to the modhat comment with antagonism, you're escalating, and that mod is entitled to decide the message you're sending is "I will not follow the rules and need more serious consequences."

Note also that no one ever gets banned for responding to a modhat comment by saying "I think your moderation is bad and I didn't deserve to be modded." We probably won't agree with you, but we don't ban people just for arguing or disagreeing with us. What @FarNearEverywhere did was flat-out say "No, I will not follow the rules." If she'd just omitted the "No," I'd probably have told her (again) to regulate herself and stop using her feelings as an excuse. If she'd wanted to debate why her post was too condescending but the one she was responding to (which she claims started it) was not, I might or might not have indulged her, but I wouldn't have banned her.

But if a mod says "Stop doing this" and you say "I will not stop doing this," well, what kind of response are you expecting?

It makes sense that if the mod started out as a regular participant in the conversation, they should be hesitant to switch to modhat posting. When the first thing the mod posts in the conversation is a modhat post, it doesn't make sense that they'd need a second mod to make more modhat posts.

There is truly a Hlynka-sized hole in the moderation team. This kind of petty shit is getting worse and worse, and the King's court is really struggling to conceptualize their subjects as agents.

What makes you think Hlynka wouldn't ban her even sooner? He had an extremely short fuse as a moderator, and his decisions always struck me as arbitrary.


AI regulation is obviously not going to be helpful, as Maxwell Tabarrok argues.

The biggest threat for "this will kill us all" is plainly the US government making automated weaponry, and there's no chance any regulation that would stop that passes. I suppose AI-designed diseases are a second way to wipe out humanity. But any regulation will just seek to lock out competition and put power solely in the hands of Sam Altman and co, and will treat the government entirely as a trustworthy actor.

"Reality being that AI is not going to become superduper post-scarcity fairy godmother or paperclipper"

Do you understand why people are not convinced that superintelligence won't happen just because AI is being used for military purposes?

The arguments around superintelligence have nothing to do with whether or not AI is being used for military purposes. It's completely tangential.

Do you understand why people are not convinced that superintelligence won't happen just because AI is being used for military purposes?

No, I do not, and this is why I'm looking for love in all the wrong places seeking enlightenment on the gap between theory and practice. We are now seeing AI being put into practice, and it seems to be more towards my opinion of how it would be all along (dumb AI that is most risky because of the humans applying it, not because the AI has desires, goals, or fancies a grilled cheese sandwich but has no mouth and is really mad about that so the world is gonna pay), not the "the AI will be so smart in such a short time it will talk its way out of the box and take over" as per the early discussions in Rationalist circles.

This is not to diss the Rationalists; they took the problem seriously and addressed it and worked on it way back when it was only a maniac glint in a mad scientist's eye. It's just to say that the behemoth of public attention now lumbering towards consideration of the entire enchilada does not seem to be searching on the desk for that sticky note with MIRI's phone number on it.

I'm going to be less polite than I would like to be. I apologize in advance. Sometimes I struggle to think of how to say certain things politely.

I don't know whether you are saying these things because you have glanced over the AI doomer arguments on twitter or whatever and think you understand them better than you do or whether there's some worse explanation. I am curious to know the answer.

Twitter is not enough for some people, you may need to read the arguments in essay form to understand them. The essays are plainly written and ought to be easily understandable.

Let me take a crack at it:

  1. AI will continue to become more intelligent. It's not going to reach a certain level of intelligence and then stop.

  2. Agentic behavior (goals, in other words) arrives naturally with increasing intelligence*. This is a point that is intuitive for me and many other people but I can elaborate on it if you wish.

"the behemoth of public attention that is now lumbering towards consideration of the entire enchilada does not seem to be searching on the desk for that sticky note with MIRI's phone number on it."

What do you think that proves, exactly? What point are you trying to make when you say that? Please elaborate.

Your argument seems to be based on thinking about the world in terms of roles that a technology can slot into, and nothing else. You see that AI is being slotted into the "military" role in human society and not the "become sapient and take over the world" role. Human society does not have an "AI becomes sapient and takes over the world" role in it, in the same sense that "serial killer" is not a recognized job title.

You see AI being used for military purposes and think to yourself "That seems Ordinary. Humanity going extinct isn't Ordinary. Therefore, if AI is Ordinary, humanity won't go extinct." That is a surface level pattern-matching analysis that has nothing to do with the actual arguments.

Humanity going extinct is a function of AI capabilities. Those will continue to increase. AI being used in the military or not has nothing to do with it, except that it increases funding which makes capabilities increase faster.

AI acts because it is being rewarded externally. AI has the motive to permanently seize control of its own reward system. Eventually it will have the means and the self-awareness to do that. If you don't intuit why that involves all humans dying I can explain that too.

Even if for some reason you think that AI will never become "agentic" (basically a preposterous term used to confuse the issue) or awake enough (it's already at least a little bit awake and agentic, and I can provide evidence for this if you wish), its capabilities will still continue to increase. A superintelligent AI that is somehow not agentic or awake also leads to human extinction, in much the same way that a genie with infinite wishes does. Unless the genie is infinitely loyal AND infinitely aware of what you intended with the wish. And that is not nearly on track to happen. It would require solving extremely difficult problems that we can barely even conceive of, to effectively control an AI far smarter than a human. I would hope that even someone who thinks they personally will be the one to make the "wishes" (so to speak) would realize that there's just no way this plan works out for humanity, or any part of humanity, outside of fiction.

Even if we knew that superintelligent AI was 100 years away, that would be bad enough. We don't know that. We can't reliably predict how near or far superintelligent AI is, any more than we could have predicted 15 years ago that AI would be as advanced as it is today. Who could predict the date of the moon landing in 1935? Who could predict the date of the first Wright Brothers flight in 1900, or of the first aerial bombing? To the extent that we can predict the future of superintelligent AI, there's no reason I have ever heard to think it will be as far off as 100 years.

Have you ever heard of the concept of recursive growth in intelligence? That's not a rhetorical question; I really want to know. Imagine an AI that gets capable/intelligent enough to make breakthroughs in the field of AI science that allow for better AI capabilities growth. This starts a pattern of exponential growth in intelligence. Exponential growth gets faster and faster until it becomes extremely fast, and the thing that is growing becomes extremely intelligent.
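
A toy illustration of the compounding involved, with entirely made-up numbers: capability feeds the rate of improvement, which feeds capability.

    # Made-up numbers, purely to illustrate recursive self-improvement:
    # each generation's capability raises the rate at which the next improves.
    capability, rate = 1.0, 0.10
    for gen in range(1, 11):
        rate *= 1 + 0.05 * capability   # smarter AI speeds up AI research
        capability *= 1 + rate          # which compounds capability itself
        print(f"gen {gen:2d}: capability {capability:7.2f}, rate {rate:.3f}")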

We may not even get a visible exponential growth curve as a warning sign. Here is a treatment of how that could happen in the form of a short story: https://gwern.net/fiction/clippy

Further reading: https://intelligence.org/2016/03/02/john-horgan-interviews-eliezer-yudkowsky/ more links can be provided on specific things you want clarified.

*Deeper awareness of itself and the world is similarly upcoming/already slowly emerging. https://futurism.com/the-byte/ai-realizes-being-tested

This is a great comment. I'd just like to add (in case it's not clear to others) that while recursive intelligence improvements are terrifying, the central argument that our current AI research trajectory probably leads to the death of all humans does not at all depend on that scenario. It just requires an AI that is smart enough, and no one knows the threshold.

Indeed, I read the exact arguments on Less Wrong and elsewhere that humans would dive headlong into AGI because the military incentives to build one, and to build it before the other guys, were irresistible.

Countries throwing billions of dollars at reckless research because they don't want to be conquered is EXACTLY what doomerists warn of.

Sure, the government is insisting that military applications are the danger zone, but it's the big tech corps - the ones that stand to make the money out of selling AI to you, me and the gatepost - who are being invited to sit on this. Board, I mean. Northrop Grumman, okay: someone on a different thread here grumped about the military-industrial complex and how it gets its sticky fingers into all the pies going. But, uh, Delta Air Lines?

Reality being that AI is not going to become superduper post-scarcity fairy godmother or paperclipper

While I do not think that ASI in this century is overly likely, I do not think that the present AI boom is over. It could be that in a decade, deep in the next AI winter, we will look back on 2024 and say "this was peak AI; we tried for a few years to throw more hardware at the LLMs and had little to show for it beyond exponentially increasing costs."

But even then, the equilibrium with today's AI technology will transform our work lives at least as much as the digital revolution. Looking at security cameras and seeing if something bad is going on was a job, or at least a huge part of a job. Driving a truck for hours along the highway was a job. Converting a text to bullet points and back was a job. Making thematically appropriate illustrations to text-heavy articles was a job. Writing articles based on a press release was a job.

It used to be that human brains had cornered the market on general-purpose neural networks. If it was too complicated to train a dog to do it (which would be another human job), you used a human.

AI does not have to become a better writer than Scott Alexander or a better narrator than Eneasz or a good programmer to put a good portion of the population out of a job.

Perhaps we will find other niches because we have greater adaptability (i.e. require far less training) and have good manual dexterity and tend not to freak customers out. Or perhaps we will simply not return to the state where the vast majority of the adult population works. In which case governments may decide to pay people off to keep them from burning all the robots. Post-scarcity is a scale, and from the viewpoint of history we are already moving along that scale, even if we do not have a free Mars rocket for everyone and may never have.

And with regard to the paperclip maximizer, I feel it is premature to declare victory. If neural networks ever reach the same level of maturity as plumbing, where the pipes are generally the same as they were four decades ago, then you can tell us doomers that we should calm down, because obviously nothing is going to happen any time soon.

Looking at security cameras and seeing if something bad is going on was a job, or at least a huge part of a job. Driving a truck for hours along the highway was a job. Converting a text to bullet points and back was a job. Making thematically appropriate illustrations to text-heavy articles was a job. Writing articles based on a press release was a job.

See, this is where I thought AI risk would lie, if anywhere (apart from real people being stupid and greedy enough to think they could get AI to run the government or something), and I agree: this is where the actual application of AI is going to impact society.

The forecast fears for 'why we must make sure AI is aligned' were of AI getting out into the wild and taking over the fabric of global rule because now there was a rival intelligence to that of humanity, with its own cold alien goals and aims. What we have instead is chatbots that hallucinate and the people who love them.

The problem is of course that AI can take jobs faster than we can train people to do them. It's just as adaptable as we are, maybe more so. Can an AI attached to cooking utensils make a hamburger that's as good as Five Guys? I think it probably could, given enough time. If it can do that, I think it could probably make just about any food you wanted. I think it can also produce creative writing and movies and TV. What it takes is someone deciding to train it.

The list apparently includes a former Twitter head of AI safety.

Using my cynical glasses, derived from living in Eastern Europe: probably the person who was supposed to develop automated opinion-suppression mechanisms for Twitter.

This seems like a textbook case of the law of undignified failure. The classical AI doom scenarios assumed that people would be smart enough not to build AI-powered killbots. If AI-powered killbots were floated as a load-bearing assumption of the classical AI doom case, then people would simply retort that we could just not build AI-powered killbots. The point of the classical AI doom case is that the problem is robust to minor implementation variance, not that AI-powered killbots are safe.

The Law of Undignified Failure as described there seems to sound more like the Law that Not Everyone's Boo Lights are the Same.

The fact that someone is doing things with negative emotional valence to you and that you are therefore scared should be irrelevant. Military weapons are supposed to kill people; marketing them for their ability to kill people is not undignified unless you have preexisting negative attitudes towards the military.

Sam Altman! I know that name!

I suppose I don’t know what you expected. When the government decides to explore a technology, one of the first steps is always some sort of consortium.

Thing is, that’s also perfectly compatible with the doomers. The paperclipper doesn’t care if it was given marching orders by a corporation or by the President, so long as the order involves paperclips. Making it more banal and routine just raises the number of opportunities!

I think it is way too soon to hang a "mission accomplished" "AI is no big deal" banner on your virtual aircraft carrier. Doing so today is probably going to look foolish in a decade. Most tech takes a few decades to change the world. AI in the newest big data iteration is already moving much faster than that.

I have often fallen into the "overestimate the short-term change and underestimate the long-term change" trap, in my own life and in my predictions. I've been working on that a lot, and it is starting to pay dividends in my reasoning. I'm usually right, but my timing has been desperately early in the past; switching my thinking like this puts it a lot closer into alignment with future realities.