Culture War Roundup for the week of November 20, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

OpenAI researchers warned of an AI breakthrough before the CEO ouster, according to Reuters. It seems that, disappointingly, there's more to the Sama exit than just petty politics.

I had found myself greatly reassured by the thought that, actually, this whole debacle was just (human) politics as usual - and not the eerie dawn of some new era.

Have other Mottizens noticed a substantial disconnect between their foremost worry over the past while and that of the normies in their lives? Everyone else is chanting for Palestine, and I'm chanting sotto voce for a decade or two more of human supremacy before the singularity. And any time I could comfort myself with the thought that, well, Serious People are not yet concerned, I see some preposterous headline from those selfsame Serious People about how hillwalking is white supremacy, or equivalent bullshit. The illusion is bollocked.

Hmm this sounds alarming, I wonder what the new capability was, it must be something very powerful and dange...

Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because the individual was not authorized to speak on behalf of the company. Though only performing math on the level of grade-school students, acing such tests made researchers very optimistic about Q*’s future success, the source said.

Oh.

Truth is, 90% of all work is stupid. The difference between a committee of Harvard grads from every major (smart and competent, but no geniuses) and the kind of people who create true innovation is a couple of orders of magnitude.

AI might be around the corner, but super-human intelligence that can innovate (von Neumann, Terence Tao) is much, much farther away than we think.

Truth is, 90% of all work is stupid.

This strikes me as a very "I work in academia and so does everyone I know" type of take.

I work in academia and so does everyone I know

I have been fortunate to be surrounded by people much smarter than me, but academia-style snark was central to my not doing a PhD. Thanks for calling me out. Admittedly, my comment came off as snarky; I should rephrase it.

Some examples: Most middle manager jobs don't help in any realistic way. Most manual labor is yet to be robo-automated because human labor is cheap, not because we can't do it. Most musicians/artists do not produce anything other than shallow imitations of their heroes. Most STEM-trained practitioners act more as highly skilled monkeys who imitate what they are taught with perfect precision. Hell, even most R1 researchers spend most of their time doing 'derivative' research that is more about asking the most obvious question than creating something truly novel.

There is nothing wrong with that. I respect true expertise. It needs incredible attention to detail, encyclopedic knowledge of all the edge cases in your field, and a craftsman's precision. However, if a problem that needs just those 3 traits could be done badly by an AI model in 2010... then it was only a matter of time before AIs became good enough to take that job. They were already recognized to be solvable problems; the hardware and compute just hadn't caught up yet. These jobs are stupid in the same way that herding sheep is stupid for a Collie, or climbing a mountain is stupid for a goat: some combination of the 3 traits I mentioned above, performed masterfully. But the skills needed can all be acquired and imitated.

That is the sense in which I say 90% of jobs are stupid. I.e., given enough time, most average humans can be trained to do 90% of average jobs. It takes a couple of orders of magnitude more time for some, but the average human is surprisingly capable given infinite time. In hindsight, stupid is the wrong word. It's just that, when expressed like that, they don't sound like intelligence, do they? Just a machine made of flesh and blood.

Here is where the 'infinite time' becomes relevant. AIs do, in effect, have infinite time. So even if the model is stupid in 'human time', it can just run far more parallel processes, fail more, read more, and iterate more until it is as good as any top-10% expert in whatever it spends those cycles on.
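
To make that concrete, here is a toy sketch of the fail-more-and-iterate idea as best-of-N sampling; generate and score are hypothetical stand-ins for a model call and a task-specific grader, not any real API:

    import random

    def generate(prompt: str) -> str:
        """Hypothetical model call: returns one attempt at the task."""
        return f"attempt {random.random():.3f} for {prompt!r}"

    def score(attempt: str) -> float:
        """Hypothetical grader: higher is better (e.g. tests passed)."""
        return random.random()

    def best_of_n(prompt: str, n: int = 1000) -> str:
        # A model that is 'stupid' in human time can still brute-force
        # its way to a good answer: sample many attempts, keep the best.
        return max((generate(prompt) for _ in range(n)), key=score)

    print(best_of_n("a grade-school math problem", n=100))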

Now coming to what AIs struggle to do; let's call that novelty. I believe there are 3 kinds of true novelty: interpolative, extrapolative, and orthogonal. To avoid nerd-speak, here is how I see it:

  • Interpolative - Take knowledge from 2 different fields and apply them together to create something new.
  • Extrapolative - Push the boundaries within your field using tools that already exist in that field, but by asking exploratory what-if questions that no one has tried yet.
  • Orthogonal - True geniuses are here. I don't even know how this operates. How do you think of complex numbers? How do you even ask the 'what if light and matter are the same' kind of question? By orthogonal, I mean that this line of inquiry is entirely beyond the plane of what any of today's tools might allow for.

The distinction is important.

To me, interpolative innovation is quite common and, honestly, AIs are already starting to do it sort of well. Mixing 2 different things together is something they do decently; I would not be surprised if AIs create novel 'interpolative' work in the very near future. It is literally pattern-matching 2 distinct things that look suspiciously similar. AIs becoming good at interpolative innovation will accelerate what humans were already doing. It will extend our rapid rise since the industrial revolution, but won't be a civilizational change.

Models have yet to show any extrapolative innovation, but I suspect the first promising signs are around the corner. Remember, once you can do it once, badly, the floodgates are open. If an AI can do it even 1 in a million times, all you need is for the hardware, compute, and money to catch up. It will get solved. The moment this happens is when I think AI-security people will hit the panic button. This moment will be the trigger for super-humanhood. It will likely eliminate all interesting jobs, which sucks, but to me it will still be recognizable as human.

I really hope AIs can't perform orthogonal innovation. To me, it is the clearest sign of sentience; hell, I'd say it proves super-human sentience. Orthogonal innovation often means that life before and after it is fundamentally different for those affected by it. If we see so much as an inkling of this, it is over for humans. I don't mean it's over for 99% of us. I mean it is over. We will be a space-faring people within decades, and likely extinct a few decades after.

Thankfully, I think AI models will be stuck in interpolative land for quite a while.

(P.S.: I am very sleep-deprived and my ramblings accurately reflect my tiredness, sorry.)

Necroing this due to AAQC, but have you had any luck getting GPT-style AI to do good interpolation? I've tried, but it doesn't like bridging fields very much - you really have to push it and say 'how might this narrow sub-field be relevant to my question?', otherwise you just get a standard Google summary.
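
The pushing I mean looks roughly like this two-step prompt (a minimal sketch using the openai Python client; the model name and the example bridging question are placeholders, not a recipe):

    from openai import OpenAI  # assumes the official openai>=1.0 client

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask(prompt: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    # Step 1: force it to surface the narrow sub-field first, instead of
    # letting it fall back to a generic summary.
    bridge = ask("How might coding theory be relevant to designing "
                 "robust mRNA sequences? List three concepts, briefly.")

    # Step 2: ask the real question with that bridge in context.
    print(ask("Using these concepts:\n" + bridge +
              "\n\nHow could error-correcting codes inform vaccine design?"))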

Most manual labor is yet to be robo-automated because human labor is cheap, not because we can't do it.

"Stupid" is not the same thing as "useless". Sure, a plumber crawling around in the attic looking for a tiny leak in a pipe may be something 'stupid' that could be better off automated, but when you have water running down your walls, you'll be glad of the 'stupid' human doing the 'stupid' job fixing the problem.

Most middle manager jobs don't help in any realistic way

I think this is frequently overstated. A good manager really does coordinate and organise and make decisions about who is working on what and what the requirements are; the technical workers and the product suffer if that work is not done.

Most manual labor is yet to be robo-automated because human labor is cheap, not because we can't do it.

No, getting robots to do manual labour is super difficult. Sensing and accurately moving in the physical world is still well out of reach for many applications.

Most STEM-trained practitioners act more as highly skilled monkeys who imitate what they are taught with perfect precision

Well, not quite: we actually solve problems, usually in the form of "how can I meet the requirements in the most efficient way possible?". Sure, we're not usually breaking new innovative ground, but it's actually work, and it's not stupid. I write embedded software for controlling motors; these motor controllers are used in industrial applications all over the world, from robots to dentist drills.

That is the sense in which I say 90% of jobs are stupid. I.e., given enough time, most average humans can be trained to do 90% of average jobs.

That's a stupid definition of stupid jobs.

Stupid because, given enough time, most average humans can be trained to recognise it, or stupid like this question?