
Culture War Roundup for the week of February 3, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Neuralink has caused a bit of a storm on X, taking off after claiming that three humans have what they call "Telepathy":

Today, there are three people with Telepathy: Noland, Alex, and Brad.

All three individuals are unable to move their arms and legs—Noland and Alex due to spinal cord injury (SCI) and Brad due to amyotrophic lateral sclerosis (ALS). They each volunteered to participate in Neuralink’s PRIME Study, a clinical trial to demonstrate that the Link is safe and useful in the daily lives of people living with paralysis.

Combined, the PRIME Study participants have now had their Links implanted for over 670 days and used Telepathy for over 4,900 hours. These hours encompass use during scheduled research sessions with the Neuralink team and independent use for everyday activities. Independent use indicates how helpful the Link is for real-world applications and our progress towards our mission of restoring autonomy. Last month, participants used the Link independently for an average of 6.5 hours per day.
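
A quick sanity check on those numbers (the back-of-envelope arithmetic is mine, not Neuralink's): 4,900 hours spread over 670 combined implant-days comes out to roughly 7.3 hours per day, which at least squares with the 6.5 hours/day of independent use they report for last month.

```python
# Back-of-envelope check on the quoted figures; the inputs are
# Neuralink's numbers, the arithmetic is mine.

combined_implant_days = 670   # total across all three participants
total_telepathy_hours = 4900  # research sessions + independent use
reported_daily_avg = 6.5      # independent use only, last month

overall_avg = total_telepathy_hours / combined_implant_days
print(f"Average use over the whole trial: {overall_avg:.1f} h/day")
# -> ~7.3 h/day including research sessions, which is at least in the
#    same ballpark as the 6.5 h/day of purely independent use
```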

Assuming this is all true and the kinks will be worked out relatively soon, this is... big news. Almost terrifyingly big news.

AI tends to suck up most of the oxygen in tech discourse, but I'd say that, especially if LLMs continue to plateau, Neuralink could be as big or even bigger. Many AI maximalists argue, after all, that the only way humanity will be able to compete and keep up in a post-AGI world is to join with machines and basically become cyborgs through technology like Neuralink.

Now I have to say, from a personal aesthetic and moral standpoint, I am close to revolted by this device. It's interesting and seems quite useful for quadriplegics and others living with paralysis, but the idea of a normal person "upgrading" their brain via this technology disturbs me greatly.

There are a number of major concerns I have, to summarize:

  • The security/trust issue of allowing a company to have direct access to your brain
  • Privacy issues: other people hacking your Link and being able to read all of your thoughts, etc
  • "Normal" people without Neuralinks being outcompeted by those willing to trade their humanity for technical competence
  • LLMs and other AI systems being able to directly hijack human agents, and work through them
  • Emotional and moral centers in the human brain being cut off and overridden completely by left-brained, "logical" thinking

Does this ring alarm bells for anyone else? I'd imagine @self_made_human and others on here are rubbing their hands together with glee, and I have to say I'd have felt the same a few years back. But at the moment I am, shall we say... concerned by these developments.

You're right that I'm happy that Neuralink is taking off, but I disagree strongly that neural cybernetics are of any real relevance in the near term.

At best, they provide bandwidth, with humans able to delegate cognitive tasks to a data center if needed. This is unlikely to be a significant help when it comes to having humans keep up with the machines: the fundamental bottleneck is the meat in the head, and we can't replace most of it.

For a short period of time, Centaur teams of humans and chess bots beat chess bots alone. This is no longer true: having a human in the loop is now purely detrimental for the purposes of winning chess games. Any overrides the human makes to the bot's choices are, in expectation, net negative.
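
To make "net negative in expectation" concrete, here's a toy expected-value model - every number in it is made up for illustration, and none of it comes from real chess data:

```python
# Toy expected-value model of centaur-chess overrides.
# Illustrative assumption: in positions where the human disagrees with
# the engine, the engine turns out to be right 70% of the time.

p_engine_right = 0.7        # assumed, not measured
gain_if_human_right = 1.0   # value gained when the override was correct
loss_if_human_wrong = 1.0   # value lost when it wasn't

ev_override = ((1 - p_engine_right) * gain_if_human_right
               - p_engine_right * loss_if_human_wrong)
print(f"Expected value of overriding the engine: {ev_override:+.2f}")
# -> -0.40: under these assumptions, every override costs you on average,
#    so the optimal policy is to never touch the engine's moves
```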

So it will inevitably go with just about everything. A human with their skull crammed with sensors will still not beat a server rack packed with H100 successors.

Will it help with the monumental task of aligning ASI? Maybe. Will it make a real difference? I expect not; AI is outstripping us faster than we can improve ourselves.

You will not keep up with the AGIs by having them on tap, at the latency enforced by the speed of your thoughts, any more than bolting a 1993 Camry engine onto an F1 car will make it faster.
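
To put rough numbers on that gap (the human-side figures are commonly cited estimates, the ~8 bits/s is Neuralink's publicized cursor-control record, and the interconnect figure is an order-of-magnitude guess at NVLink-class hardware; treat all of them as ballpark only):

```python
# Order-of-magnitude bandwidth comparison. All figures are rough
# assumptions, good to maybe a factor of a few, no better:
#   ~40 bits/s   - commonly cited information rate of human speech
#   ~8 bits/s    - Neuralink's publicized best cursor-control rate
#   ~1e12 bits/s - NVLink-class GPU interconnect, order of magnitude

speech_bps = 40
bci_cursor_bps = 8
gpu_interconnect_bps = 1e12

print(f"interconnect / speech: {gpu_interconnect_bps / speech_bps:.0e}x")
print(f"interconnect / BCI:    {gpu_interconnect_bps / bci_cursor_bps:.0e}x")
# -> ten to eleven orders of magnitude either way: the Camry engine
#    bolted to the F1 car
```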

I am agnostic about whether true digital humans could manage, but I expect that they'd get there by pruning away so much of themselves that they're no different from an AI. It is very unlikely that human brains and modes of thinking are the optimal forms of intelligence once the hardware is no longer constrained by biology and Unintelligent Design.

Raw horsepower arguments are something I'm familiar with, as an emulation enthusiast. I would say that the human brain - for all its foibles - is difficult to truly digitize with current or even future technology. (No positronic brains coming up anytime soon.) In a way, it is similar to the use case of retrogaming - an analogy I will attempt to explain.

Take for example the Nintendo 64. No hardware that exists currently can emulate the machine better than itself, despite nearly thirty years of technological progress. We've reached the 'good enough' phase for the majority of users, but true fidelity remains out of reach without an excessive amount of tweaking and brute force. If you're a purist, the best way to play the games is on the original hardware.
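
To give a sense of why fidelity is so expensive, here's a crude cost model - the N64's CPU clock is real, but the overhead-per-cycle figure is purely my assumption, since real emulators vary wildly from far less (high-level emulation) to far more:

```python
# Crude cost model for cycle-accurate emulation. The clock speed is
# real; the 50x overhead per emulated cycle is a made-up but plausible
# illustrative figure.

n64_cpu_hz = 93.75e6        # NEC VR4300 main CPU clock
host_cycles_per_guest = 50  # assumed overhead per emulated cycle

required_host_hz = n64_cpu_hz * host_cycles_per_guest
print(f"Host clock needed for the CPU alone: {required_host_hz / 1e9:.1f} GHz")
# -> ~4.7 GHz for one core, before you even touch the RCP (the
#    graphics/audio coprocessor) - which is why emulators cut corners
#    and purists stick to original hardware
```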

And human brains are like that: unlike silicon, idiosyncratic in their own way. Gaming has far surpassed the earliest days of 3D, much as AGIs will surpass human intellect. But there are many ways to extend the human experience that are not based on raw performance. The massive crash in the price of flash memory has given us flash cartridges that hold a system's entire library on a single SD card - not so different from having a perfect memory, for instance. I wouldn't mind offloading my subjective experiences into a backup, and having the entire human skill set accessible with the right reconfiguration.

Even if new technology makes older forms obsolescent, I'm sure that AIs - if they are aligned with us and share our interests - will have some passing interest in such a thing, much as I have an interest in modding my Game Boy Color. Sure, it will never play Doom Eternal. But that's not the point. Preserving the qualia of the experience of limitations is in and of itself worthwhile.

Take for example the Nintendo 64. No hardware that exists currently can emulate the machine better than itself

Eh? Kaze mentions his version of Mario running fast on real hardware as if it were taken for granted that emulators could deliver much higher performance.

I think there's a difference between performance and fidelity: we, as humans, want to optimize for human-like fidelity (because it closely matches our own subjective experience).

Emulators can upscale Super Mario 64 to HD resolutions, but the textures remain as they were. (I believe there are modpacks that fix the problem, but that's another matter.) Resolution probably isn't the best analogue for IQ, but I would argue that part of the subjective human experience is to be restricted within the general limits of human intelligence. Upscaling our minds to AGI levels of processing power would probably not look great, or produce good results.

There's only so far you can go in altering software (the human mind, in our analogy) before it becomes, measurably, something else. Skyrim with large titty mods and HD horse anuses is not the same as the original base game. There's only so much we can shim the human mind into the transcendent elements of the singularity. Eventually, the human race will have to make the transition the hard way.