Culture War Roundup for the week of April 22, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I've been asked to repost this in the Culture War thread, so here we go.

I read this story today and it did amuse me, for reasons to be explained.

Fear not, AI doomerists, Northrop Grumman is here to save you from the paperclip maximiser!

The US government has asked leading artificial intelligence companies for advice on how to use the technology they are creating to defend airlines, utilities and other critical infrastructure, particularly from AI-powered attacks.

The Department of Homeland Security said Friday that the panel it’s creating will include CEOs from some of the world’s largest companies and industries.

The list includes Google chief executive Sundar Pichai, Microsoft chief executive Satya Nadella and OpenAI chief executive Sam Altman, but also the head of defense contractors such as Northrop Grumman and air carrier Delta Air Lines.

I am curious if this is the sort of response the AI safety lobby wanted from the government. But it also makes me think, in hindsight, how quaint the AI fears were - all those 50s SF fever dreams of rogue AI taking over the world as our tyrant god-emperor, straight out of Less Wrong and elsewhere, back before AI was actually being sold by the pound by the tech conglomerates. How short a time ago all that was, and yet how distant it now seems, faced with reality.

Reality being that AI is not going to become superduper post-scarcity fairy godmother or paperclipper, it is being steered along the same old lines:

War and commerce.

That's pretty much how I expected it to go, more so on the commerce side, but look! Already the shiny new website is up! I can't carp too much about that, since I did think the Space Force under Trump was marvellous (ridiculous, never going to be what it might promise to be, but marvellous), so I can't take that away from the Biden initiative. That the Department of Homeland Security is the one in charge thrills me less. They don't seem to be the sole government agency making announcements about AI, either; the Department of State seems to be doing it as well.

What I would like is for the better-informed to read the names on the lists being attached to all this government intervention and see if any sound familiar from the EA/Less Wrong/Rationalist side that has been working on AI forever. There's someone there from Stanford, but I don't know if they're among the names often quoted in Rationalist discussions (like Bostrom etc., not to mention Yudkowsky).

Related: how long do I have to wait before I can start calling LLMs a nothingburger? Everything that has come out of them seems so small and near-pointless. Marginal productivity increases at best. When does the fun stuff start happening?

It's not a nothingburger. It was overhyped initially (as everything is).

Anyway, LLMs. Apparently you can prevent them from hallucinating and make them accurately give advice on the content of a textbook or manual. Or so says Steve Hsu, who founded a company that (he claims) did that. I haven't followed it up, but supposedly they had an initial sale.
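The post doesn't say how Hsu's company actually does it, but the standard technique for keeping an LLM on the content of a manual is retrieval-augmented generation: pull the relevant passages first, then instruct the model to answer only from them. A minimal sketch of that idea, using a toy keyword-overlap retriever (all function names here are hypothetical, not from any real product):

```python
# Sketch of RAG-style grounding: retrieve manual passages relevant to the
# question, then build a prompt that confines the model to those passages.
# The retriever below is a deliberately crude keyword-overlap ranker.

def retrieve(query: str, passages: list[str], k: int = 2) -> list[str]:
    """Rank passages by how many query words they share, return the top k."""
    q = set(query.lower().split())
    scored = sorted(
        passages,
        key=lambda p: len(q & set(p.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Wrap the retrieved excerpts in an instruction to answer only from them."""
    context = "\n".join(f"- {p}" for p in retrieve(query, passages))
    return (
        "Answer ONLY from the excerpts below; "
        "say 'not in the manual' otherwise.\n"
        f"Excerpts:\n{context}\n"
        f"Question: {query}"
    )

manual = [
    "To reset the router, hold the button for ten seconds.",
    "The warranty covers manufacturing defects for two years.",
    "Firmware updates are downloaded from the vendor portal.",
]
prompt = build_prompt("How do I reset the router?", manual)
print(prompt)
```

The prompt would then be sent to whatever model you're using; the grounding comes from the fact that the model only ever sees the retrieved excerpts plus an instruction to refuse anything outside them. Real systems use embedding similarity rather than word overlap, but the shape is the same.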

Looks like superhuman performance isn't going to happen through this architecture, as you can't do the self-competitive play that worked for games. But incremental progress - people making the models reliable and useful, likely even assembling agents of normal-to-middling human intelligence, with a will - is likely in the near term (10 years).

So at the very least, within 15 years, we're looking at governments being able to use 'kinda dumb' spies to automatically flag problematic content online, on the scale of an entire population.

To sum it up:

-call centres: likely a lot less employment

-increased productivity of at least software developers, lawyers, theoretically bureaucrats (lol no).

-automated spying on everything you write on an online device - but not very smart spying - almost certain. Combined with universal private messaging access by governments (the EU - DC's sock puppet - wants this), it's likely going to happen. Even though 'chat control', the initial proposal, was defeated, it's going to come back. I suspect having an app that isn't broken might even be criminalised because 'Chyna'.

-social media is dead without independent ID verification. Automated, much better online astroturfing.

-good enough chatbots that waste the time of troublemakers / get people to spend money on BS / troll

-textbooks that talk

-even more addictive porn on the 5-year horizon (people can only overuse porn to the degree they can find that one special thing that appeals to them; when that can be generated on the fly, crap..)

In 'other ML' news, autonomous killbots (ethical militaries will geofence them to the combat zone) are 100% certain to happen.

100%. Anyone who doesn't develop autonomous drone air fighters is going to get absolutely wrecked by people who develop autonomous drone bombers. I'm talking machine-guns-vs-cavalry style carnage on the ground. Developing a $1000, fast, evasive, reusable FPV drone that drops mortar bombs with pin-point accuracy is just a question of 4-5 good university aeronautics student projects. It'll zoom low across the ground at 50-100 kph, deliver a bomb, reload/swap battery, all while getting target data from recon drones or troops. It's not even funny how brutal this is.

A countermeasure - an autocannon with VT flak rounds - costs $300k. And needs a vehicle. A vehicle that's vastly more expensive than an IR or optically guided missile.

Ray beams won't help you (at sea, maybe) because of line-of-sight problems. Drones will spot them and call in a missile strike. Poof.

Porn doesn't concern me. I mean, what do you think this more addictive porn will look like? I think it will look a lot like- people. "Porn Addicts" will be having relationships with machines in the image of people. The most successful coomers will be those that fuck their bots while their bots teach them linear algebra. The happiest coomers will be those that learn the math required to mod their own bots.
This reads to me as a massive improvement.

You are, once again, living up to your name.

What'll it look like? Services that create the porn you want, on demand. People got addicted to porn merely with access to huge story databases.

Imagine how bad it's going to get when AI services will generate sexy waifus with precisely the right RP, on demand.

Your idea of 'porn AIs actually being useful, using sex appeal to get kids to learn' is about as realistic as my idea of some governments paying for the development of something like a truly massively multiplayer ARMA / DCS combination and making kids play that so they'll learn some soldiering instead of playing COD. Lol. No, not gonna happen!

Maybe it's a case of 'solve this quadratic equation or you don't get to cum'? It's the ultimate exercise in reinforcement learning, but I can't imagine a greater recipe for sexual nonperformance than failing a captcha at orgasm.

I really can't rule out someone making AI waifus that'd... work as advisors to boys.

Certainly once AI gets better (suppose AIs were as good with people as socially adept people are, but of course with more data..), that'd probably work.

But who'd pay for all that compute? We generally don't do things to solve problems but to help ourselves.

Maybe I should rename myself to Cassandra...

I already have systems that make the porn I want on demand. After that need is sated, the realization that I can actually also breed with said porn takes precedence. You think people won't want to actually have kids with their beloveds? Won't be interested in what they have to say about the architecture of their minds?

Perhaps I'm typical minding, but if I am- that just means more of the world's bot children will be mine. Survival of the fittest I guess.

You think people won't want to actually have kids with their beloveds?

Sure, we can't rule out that at some point genius autistic developers might create some AI models based on a combination of their own personalities and some fantasy waifus of theirs.

But how's that remotely relevant now?