
Culture War Roundup for the week of November 20, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


OpenAI researchers warned of AI breakthrough before CEO ouster, according to Reuters. It seems that, disappointingly, there's more to the Sama exit than just petty politics.

I had found myself greatly reassured by the thought that, actually, this whole debacle was just (human) politics as usual - and not the eerie dawn of some new era.

Have other motizens noticed a substantial disconnect between their foremost worry of the past while and that of the normies in their lives? Everyone else is chanting for Palestine, and I'm chanting sotto voce for a decade or two more of human supremacy before the singularity. And any time I could comfort myself with the thought that, well, Serious People are not yet concerned, I see some preposterous headline from the selfsame Serious People about how hillwalking is white supremacy, or equivalent bullshit. The illusion is bollocked.

If it were anywhere even near sentient AI, then the Feds would have taken over by now. No, I don't mean one random DC strategist on the board. I mean that OpenAI's network would have been air-gapped and massive gag orders would have been placed on everyone involved. No multicolored Twitter hearts. OpenAI, for all their generation-defining technology, still has a rather spotty record of crying wolf when it comes to sentient AI. I don't think this one is any different.

But, but, but... it is likely that they have stumbled upon another step-change improvement over GPT-4, which likely means they can destroy another few hundred startups, businesses, and careers.

It wouldn't take too much to make all but the top 10% of the following jobs obsolete:

  • Translators
  • Data Analysts
  • Simple CRUD backend makers
  • Simple Form/static front end makers
  • Generic Consultants
  • Virtual Doctors

Note, the biggest issue with agents has been that they lose context partway through a task, or meander. But all current agent architectures are super naive compared to the kind of swarm-RL stuff that has been out for a good decade. With GPT-4 Turbo's 128k context they have effectively solved RAG, which lets a model surf pretty much the entire internet for a lot longer without meandering, making its intelligence up-to-date and functionally infinite.
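For anyone who hasn't played with this stuff: the naive loop I'm gesturing at is just retrieve, stuff, generate. Here's a toy sketch; `embed`, `search_index`, and `call_llm` are hypothetical stand-ins for a real embedding model, vector store, and chat model, not any actual API:

```python
# Toy sketch of a naive retrieve-then-generate (RAG) loop.
# All three helpers below are hypothetical stand-ins.

def embed(text: str) -> list[float]:
    # Stand-in: a real system would call an embedding model here.
    return [float(ord(c) % 7) for c in text[:16]]

def search_index(query_vec: list[float], k: int = 3) -> list[str]:
    # Stand-in: a real system would run nearest-neighbour search
    # over a document index using query_vec.
    return ["doc snippet 1", "doc snippet 2", "doc snippet 3"][:k]

def call_llm(prompt: str) -> str:
    # Stand-in for a chat-model call (e.g. GPT-4 Turbo).
    return f"(model answer conditioned on {len(prompt)} prompt chars)"

def answer(question: str) -> str:
    # 1. Retrieve: fetch the snippets most similar to the question.
    snippets = search_index(embed(question))
    # 2. Stuff: cram the retrieved context into the prompt. A 128k
    #    window means far more context fits before truncation, which
    #    is exactly why meandering gets less likely.
    context = "\n".join(snippets)
    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    # 3. Generate: a single model call with no memory of prior steps.
    #    Multi-step agents repeat this loop, and that repetition is
    #    where context loss and meandering creep in.
    return call_llm(prompt)

print(answer("What did the board announcement say?"))
```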

My guess is that they have managed to fully stabilize agents for certain use cases and are fairly sure they can deploy 'robot employees' for certain jobs within the span of a year.

But I might be wrong.


If it is just better MCTS, slightly better RAG, and better GPT-4 RLHF, then I will be so disappointed. Yes, it is much better, but honestly, it speaks more to Google's incompetence and Facebook's complete not-giving-a-fuck that OpenAI could build up this kind of lead. None of this is fundamentally novel.

We are in a free-lunch era where people think OpenAI are the best around just because everyone else can barely walk without tripping over themselves. (I say this as someone who still considers OpenAI the best applied engineering team assembled since Xerox PARC.)

I too considered OpenAI to be the gold standard, but I was astounded to find that the recently released Assistants API maintains state with minimal to zero synthesis. Thanks to testers (@self_made_human), I learned quite quickly that some sort of synthesis is necessary: doing anything more sophisticated than "search" requires a cognitive architecture that remembers and forgets.

https://drive.google.com/file/d/17u4X8O_2TxZXZRv_P7x-177aMzigrJmJ/view?usp=drive_link
https://drive.google.com/file/d/185oaULl_29F9-mXQ420KQIKq_rU_crpv/view?usp=drive_link
https://drive.google.com/file/d/184szy3fS4PmFF_D1Ock_NNrxCeg-sFV4/view?usp=drive_link
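To make "remembers and forgets" concrete, this is the kind of loop I have in mind: keep recent turns verbatim, and fold evicted turns into a running summary instead of just dropping them. A toy sketch, where `summarize` is a hypothetical stand-in for a model call (nothing here is the actual Assistants API):

```python
# Sketch of a memory that "remembers and forgets": old turns are
# compressed into a running summary rather than piling up verbatim.

def summarize(old_summary: str, dropped_turns: list[str]) -> str:
    # Hypothetical stand-in: a real system would ask the model to
    # fold the dropped turns into the summary, keeping what matters.
    return old_summary + " | " + "; ".join(t[:40] for t in dropped_turns)

class RollingMemory:
    def __init__(self, max_turns: int = 6):
        self.summary = ""            # long-term memory (lossy)
        self.turns: list[str] = []   # short-term memory (verbatim)
        self.max_turns = max_turns

    def add(self, turn: str) -> None:
        self.turns.append(turn)
        if len(self.turns) > self.max_turns:
            # "Forget": evict the oldest turns from the verbatim
            # buffer, but "remember" by synthesizing them first.
            dropped, self.turns = self.turns[:2], self.turns[2:]
            self.summary = summarize(self.summary, dropped)

    def context(self) -> str:
        # What actually gets sent to the model on each call.
        return f"Summary so far: {self.summary}\nRecent turns:\n" + "\n".join(self.turns)

mem = RollingMemory()
for i in range(10):
    mem.add(f"turn {i}: ...")
print(mem.context())  # summary covers turns 0-3; turns 4-9 verbatim
```

Plain thread storage is just the `turns` list with no `summarize` pass; in my testing, that synthesis step is the part that seems to be missing.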

Hmm, I think editing in usernames doesn't ping them as usual, but either way I'm happy to have helped!

It wouldn't take too much to make all but the top 10% of the following jobs obsolete:

This may be theoretically true but strikes me as much too optimistic.

I use AI tools all day, every day, and continuously have my mind blown. I say "this is going to change everything" to myself, and to my wife if she's not tired of listening to me. But whenever I talk to other people, even other technology professionals, and hear them tell me they don't find this useful, I become resigned to the fact that it's going to take decades for the AI tech we have now to permeate the rest of industry. Just as it took decades and a fucking pandemic before we began to accept remote work as a viable (though perhaps not optimal) way to function, even though people who were hip have been doing it since the late '90s.

It wouldn't take too much to make all but the top 10% of the following jobs obsolete

I think we're going to get the worst of all worlds there. Take translators: companies and even government offices like the justice system will turn to machine translation instead of human translators because it's cheaper! faster! But a lot of nuance will be lost (I think things like slang, simile, etc. will not be understood; we'll get literal translation instead) and even outright inaccuracies. Not much comfort to the guy who gets convicted of a crime because the AI translation buggered up his statements to the cops and in court, but hey, it saves $$$$ for the taxpayer (allegedly).

Same on down the line. Virtual doctors that don't order tests because they are set to "cheapest, most generic diagnosis" and miss the rare-but-real case where this time it is a zebra, not a horse. Being able to pay for a real live human doctor is going to be the next dividing line between the "huh, how come rich people live longer? 'Tis a mystery!" classes.

You are assuming the feds are competent.

They aren't.

There are plenty of highly competent people in the USG at senior levels.

Yeah, like Jake Sullivan, you mean?

EDIT: maybe be more specific. Which person is competent, and at what? Because the record of the past 30 years is somewhat dismal.

If it was anywhere even near sentient AI then the Feds would have taken over by now.

My impression from Zvi's infodumps is that the NatSec crowd is kinda sleeping on AI. I imagine a rogue-AI incident would more than suffice to wake them up, but that's no good if it kills us.

I think CIA people (Will Hurd), RAND people (Tasha McCauley) and Georgetown people (Helen Toner) on the board of OpenAI were keeping them informed at least a little bit, but who knows how they'll do now!

NatSec isn't "sleeping on AI" so much as they've concluded that LLMs are an evolutionary dead-end for the use cases they have in mind.

Which is a form of sleeping on AI; they see it only as a tool, not as a potential adversary in its own right. Like I said, though, a rogue-AI incident would definitely fix that; a lot of my !doom probability routes through "we get a rogue AI that isn't smart enough to kill us all, then these kinds of people force the Narrative into Jihad".

they see it only as a tool, not as a potential adversary in its own right.

What use cases do you think I'm referring to?