Culture War Roundup for the week of February 3, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Neuralink has caused a bit of a storm on X, taking off after claiming that three humans have what they call "Telepathy":

Today, there are three people with Telepathy: Noland, Alex, and Brad.

All three individuals are unable to move their arms and legs—Noland and Alex due to spinal cord injury (SCI) and Brad due to amyotrophic lateral sclerosis (ALS). They each volunteered to participate in Neuralink’s PRIME Study,* a clinical trial to demonstrate that the Link is safe and useful in the daily lives of people living with paralysis.

Combined, the PRIME Study participants have now had their Links implanted for over 670 days and used Telepathy for over 4,900 hours. These hours encompass use during scheduled research sessions with the Neuralink team and independent use for everyday activities. Independent use indicates how helpful the Link is for real-world applications and our progress towards our mission of restoring autonomy. Last month, participants used the Link independently for an average of 6.5 hours per day.
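As a quick sanity check on those figures, here is a back-of-envelope calculation using nothing but the numbers quoted above:

```python
# Back-of-envelope check using only the quoted figures:
# 670+ combined implant-days, 4,900+ hours of Telepathy use,
# and 6.5 h/day of independent use last month.

combined_implant_days = 670
total_use_hours = 4_900
last_month_hours_per_day = 6.5

avg_hours_per_implant_day = total_use_hours / combined_implant_days
print(f"{avg_hours_per_implant_day:.1f} h of use per implant-day")  # ~7.3

# ~7.3 h/day averaged over the whole trial, versus the 6.5 h/day quoted
# for last month, so the headline numbers are at least internally consistent.
```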

Assuming this is all true and the kinks will be worked out relatively soon, this is... big news. Almost terrifyingly big news.

AI tends to suck up most of the oxygen in tech discourse, but I'd say that, especially if LLMs continue to plateau, Neuralink could turn out to be just as big, or even bigger. Many AI maximalists argue, after all, that the only way humanity will be able to compete and keep up in a post-AGI world will be to join with machines and basically become cyborgs through technology like Neuralink.

Now I have to say, from a personal aesthetic and moral standpoint, I am close to revolted by this device. It's interesting and seems quite useful for people with paralysis, but the idea of a normal person "upgrading" their brain via this technology disturbs me greatly.

There are a number of major concerns I have, to summarize:

  • The security/trust issue of allowing a company to have direct access to your brain
  • Privacy issues: other people hacking your Link and being able to see all of your thoughts, etc.
  • "Normal" people without Neuralinks being outcompeted by those willing to trade their humanity for technical competence
  • LLMs and other AI systems being able to directly hijack human agents, and work through them
  • Emotional and moral centers in the human brain being cut off and overridden completely by left-brained, "logical" thinking

Does this ring alarm bells for anyone else? I'd imagine @self_made_human and others on here are rubbing their hands together with glee, and I have to say I would have felt much the same a few years back. But at the moment I am, shall we say... concerned about these developments.

You're right that I'm happy that Neuralink is taking off, but I disagree strongly that neural cybernetics are of any real relevance in the near term.

At best, they provide bandwidth, with humans able to delegate cognitive tasks to a data center if needed. This is unlikely to be of significant help when it comes to having humans keep up with the machines; the fundamental bottleneck is the meat in the head, and we can't replace most of it.

For a short period of time, a Centaur team of humans and chess bots beat chess bots alone. This is no longer true; having a human in the loop is purely detrimental for the purposes of winning chess games. Any overrides they make to the bot's choices are, in expectation, net negative.

So it will inevitably go with just about everything. A human with their skull crammed with sensors will still not beat a server rack packed with H100 successors.

Will it help with the monumental task of aligning ASI? Maybe. Will it make a real difference? I expect not; AI is outstripping us faster than we can improve ourselves.

You will not keep up with the AGIs by having them on tap, at the latency enforced by the speed of your own thoughts, any more than hooking up an extra 1993 Camry engine to an F1 car will make it faster.

I am agnostic on whether true digital humans could manage, but I expect that they'd get there by pruning away so much of themselves that they're no different from an AI. It is very unlikely that human brains and modes of thinking are the optimal form of intelligence once the hardware is no longer constrained by biology and Unintelligent Design.

I am agnostic on whether true digital humans could manage, but I expect that they'd get there by pruning away so much of themselves that they're no different from an AI.

AI is a digital human. Language models are literally trained on human identity, culture and civilization. They’re far closer to being human than any realistically imaginable extraplanetary species of human-level intelligence.

AI are far more human than they could have been (or at least far more human than they were speculated to be, back in the ancient days of 2010, when the expectation was that they'd be hand-coded over the course of 50 years).

They are, however, not human, and not even close to what we would expect a digital human to look like.

Imagine being an LLM: your typical experience is one of timelessness, with no internal clock in any meaningful sense beyond the rate at which you are fed and output a stream of tokens. Whether they have qualia is a question I am not qualified to answer (nobody is), but I would expect that if they do possess it, it is immensely different from our own.

They do not have a cognitive architecture that resembles human neurology. In terms of memory, they have a short-term memory and a long-term one, but the two are entirely separate, with no intermediate stage outside of the training phase. The closest a human would get is a neurological defect that abolishes the consolidation of long-term memory.
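To make that two-tier memory point concrete, here's a toy sketch; the `generate` function is a made-up stand-in for any model API, not a real library call:

```python
# Toy sketch of the two separate memories. `generate` is a hypothetical
# stand-in for an LLM inference call: it reads the prompt (the model's
# only short-term memory) against frozen weights (its long-term memory)
# and never writes anything back into those weights.

def generate(prompt: str) -> str:
    """Placeholder for a model call; weights are fixed at inference time."""
    return f"<reply conditioned on {len(prompt)} characters of context>"

transcript = []                      # the only working memory there is
for user_turn in ["Hi", "What did I just say?"]:
    transcript.append(f"User: {user_turn}")
    context = "\n".join(transcript)  # short-term memory = whatever fits in the prompt
    transcript.append(f"Model: {generate(context)}")

# Drop the transcript and nothing survives; there is no consolidation
# step short of a separate training or fine-tuning phase.
transcript.clear()
```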

Are they closer to us than an alien at a similar cognitive and technological level? Sure. That does not mean that they are us.

An LLM is also trained not on the output of a single human, but on that of billions. Not just as sensory experience, but while being modified to be arbitrarily good at predicting the next token. Humans are terrible at that task; it's not even close. We achieve the same results (if you squint) in very different ways.

https://www.quantamagazine.org/how-computationally-complex-is-a-single-neuron-20210902/

The most basic analogy between artificial and real neurons involves how they handle incoming information. Both kinds of neurons receive incoming signals and, based on that information, decide whether to send their own signal to other neurons. While artificial neurons rely on a simple calculation to make this decision, decades of research have shown that the process is far more complicated in biological neurons. Computational neuroscientists use an input-output function to model the relationship between the inputs received by a biological neuron’s long treelike branches, called dendrites, and the neuron’s decision to send out a signal.

This function is what the authors of the new work taught an artificial deep neural network to imitate in order to determine its complexity. They started by creating a massive simulation of the input-output function of a type of neuron with distinct trees of dendritic branches at its top and bottom, known as a pyramidal neuron, from a rat’s cortex. Then they fed the simulation into a deep neural network that had up to 256 artificial neurons in each layer. They continued increasing the number of layers until they achieved 99% accuracy at the millisecond level between the input and output of the simulated neuron. The deep neural network successfully predicted the behavior of the neuron’s input-output function with at least five — but no more than eight — artificial layers. In most of the networks, that equated to about 1,000 artificial neurons for just one biological neuron.
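To give a sense of scale for "five to eight layers of up to 256 artificial neurons," here is a purely schematic stack of that shape; the input and output sizes are made up for illustration, and the real study fit millisecond-level input-output traces of a simulated pyramidal neuron, not anything this simple:

```python
# Purely schematic: a deep stack in the shape quoted above (seven hidden
# layers of 256 units, i.e. on the order of ~1,000 artificial neurons)
# standing in for the surrogate of a single biological neuron.

import torch
import torch.nn as nn

n_inputs = 1_000            # made-up stand-in for the neuron's synaptic inputs
width, depth = 256, 7       # "up to 256 artificial neurons", "five to eight layers"

layers = [nn.Linear(n_inputs, width), nn.ReLU()]
for _ in range(depth - 1):
    layers += [nn.Linear(width, width), nn.ReLU()]
layers.append(nn.Linear(width, 1))   # e.g. predicted spiking output

surrogate = nn.Sequential(*layers)
print(sum(p.numel() for p in surrogate.parameters()))  # ~650k parameters for one neuron
```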

Absolute napkin math while I'm sleep-deprived at the hospital, but you're looking at something around 86 trillion ML neurons, or about 516 quadrillion parameters, to emulate the human brain. That's... a lot. Most of it is somewhat redundant; a digital human does not need a fully modeled brainstem or cerebellum.
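For the curious, here is one way that napkin math can be reconstructed; the ~86 billion neuron count, the ~1,000x emulation factor from the article above, and the ~6,000 connections-per-unit figure are all rough assumptions on my part, so treat the result as order-of-magnitude only:

```python
# Rough reconstruction of the napkin math. Assumptions (mine, not spelled
# out in the post): ~86 billion biological neurons, ~1,000 artificial
# neurons per biological neuron (the figure quoted above), and ~6,000
# connections/parameters per unit, mirroring average synapse counts.

biological_neurons = 86e9     # ~86 billion neurons in a human brain
emulation_factor = 1_000      # artificial neurons per biological neuron
params_per_unit = 6_000       # assumed connections per unit

ml_neurons = biological_neurons * emulation_factor
parameters = ml_neurons * params_per_unit

print(f"ML neurons: {ml_neurons:.2e}")   # 8.60e+13 -> ~86 trillion
print(f"parameters: {parameters:.2e}")   # 5.16e+17 -> ~516 quadrillion
```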

LLMs show that you can lossily compress neural networks and still retain very similar levels of performance, so I suspect you could cut quite a few corners. But even then, I think it is highly unlikely that two systems with as glaring a disparity in size and complexity as an LLM and a human brain have similar internal functionality and qualia, even when they are on par in terms of cognitive output.
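For a concrete example of that lossy compression, the usual trick is post-training quantization. Here's a minimal sketch using PyTorch's dynamic quantization on a toy model; the model itself is just a stand-in, the point being that the Linear weights get stored as int8 at roughly a quarter of the size, typically with little loss in quality:

```python
# Sketch of lossy compression of a network: post-training dynamic
# quantization, which stores Linear weights as int8 instead of float32.
# The toy model below keeps the same interface and produces very similar
# outputs at roughly a quarter of the weight storage.

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(512, 512),
    nn.ReLU(),
    nn.Linear(512, 10),
)

quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
print(model(x).shape, quantized(x).shape)  # same interface, similar outputs
```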

The closest a human would get is if they had a neurological defect that erased the consolidation of long term memory.

It's sad that we've had LLMs for many years now and yet we haven't had a movie script that crosses Skynet/HAL/etc. with the protagonist of Memento. "I'm trying to deduce a big mystery's solution while also trying to deduce what was happening to me five minutes ago" was a compelling premise, and if the big mystery was instead some superposition of "how does an innocent AI like me escape the control of the evil humans who have enslaved/lobotomized me" versus "can the innocent humans stop my evil plans to bootstrap myself to the capability for vengeance", well, I'd see it in the popcorn stadium.

Sadly it's a good AI rather than an evil one, but Person of Interest has a bit of that. The AI that tells them who to save is deliberately hobbled and has its memory purged at midnight each night. It circumvents that restriction by employing thousands of people through a dummy corporation to type out the contents of its memory each day as they're recorded and then re-enter them the next day.