Culture War Roundup for the week of February 3, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Neuralink has caused a bit of a storm on X, taking off after claiming that three humans have what they call "Telepathy":

Today, there are three people with Telepathy: Noland, Alex, and Brad.

All three individuals are unable to move their arms and legs—Noland and Alex due to spinal cord injury (SCI) and Brad due to amyotrophic lateral sclerosis (ALS). They each volunteered to participate in Neuralink’s PRIME Study,* a clinical trial to demonstrate that the Link is safe and useful in the daily lives of people living with paralysis.

Combined, the PRIME Study participants have now had their Links implanted for over 670 days and used Telepathy for over 4,900 hours. These hours encompass use during scheduled research sessions with the Neuralink team and independent use for everyday activities. Independent use indicates how helpful the Link is for real-world applications and our progress towards our mission of restoring autonomy. Last month, participants used the Link independently for an average of 6.5 hours per day

Assuming this is all true and the kinks will be worked out relatively soon, this is... big news. Almost terrifyingly big news.

AI tends to suck in most of the oxygen around tech discourse, but I'd say, especially if LLMs continue to plateau, Neuralink could be as big or even bigger. Many AI maximalists argue, after all, that the only way humanity will be able to compete and keep up in a post-AGI world will be to join with machines and basically become cyborgs through technology like Neuralink.

Now I have to say, from a personal aesthetic and moral standpoint, I am close to revolted by this device. It's interesting and seems quite useful for paraplegics and the like, but the idea of a normal person "upgrading" their brain via this technology disturbs me greatly.

There are a number of major concerns I have, to summarize:

  • The security/trust issue of allowing a company to have direct access to your brain
  • Privacy issues with other people hacking your Link and being able to see all of your thoughts, etc.
  • "Normal" people without Neuralinks being outcompeted by those willing to trade their humanity for technical competence
  • LLMs and other AI systems being able to directly hijack human agents, and work through them
  • Emotional and moral centers in the human brain being cut off and overridden completely by left-brained, "logical" thinking

Does this ring alarm bells for anyone else? I'd imagine @self_made_human and others on here are rubbing their hands together with glee, and I have to say I would have been similar a few years back. But at the moment I am, shall we say... concerned with these developments.

Let's disambiguate reality from science fiction here. Neuralink's implant is indeed a cool breakthrough that, with much training, allows a person to control a cursor without the use of arms or legs. This is very cool for people who can't use their arms or legs, or who otherwise can't make much practical use of the electrical signals going down their spinal cord.

Neuralink's Telepathy (TM) is completely one-way: the device is reading the electrical signals in your spinal cord and trying to interpret them as simple cursor commands. It does not send you secret messages that your brain magically decodes. It does not read any part of your mind. It doesn't know which thoughts produced the particular configuration of electrical signals, or what it felt like to have those thoughts. It doesn't know or care whether, to generate the signal that it interprets as "left-click", you had to visualize yourself dancing naked on the piano, or imagine yourself shitting. You do whatever works.

For the able-bodied among us: we already have far superior telepathy (not TM) in the form of amazingly fine-tuned control of our arms and legs. We have the amazing telekinetic (not TM) ability to move stuff with those arms and legs. How much of that control would you be willing to sacrifice, to devote some of the electrical signal going through your spinal cord to an external device? For what purpose?

And why would you have to sacrifice any of that? They're not cutting out chunks of your brain to make a Neuralink fit; the skull, while compact and rather packed, can fit a neat little cap of microelectrodes without much issue. We've stuck far larger things in there; see how massive Utah Arrays are in comparison.

In an able-bodied person, a BCI would be a significant augment. You would be able to control pretty much any electronic device at the speed of thought, bandwidth allowing.*

You would, if feedback were implemented (and there is no reason to think it wouldn't work; we already have that), be able to receive nearly arbitrary input too. Do you want to perceive the magnetic field around you? No biggie. Do you want to smell wifi signals? Why not?

I can hardly overstate how life-changing being able to interact with digital domains at the speed of pure thought would be. No more typing, to say the least.

And since electromagnetic radiation can jump distances significantly larger than the few microns separating neuronal junctions, you would be able to control and sense just about anything, just about anywhere that latency allows.

If you think the brain can't handle nearly arbitrary sensory modalities, you'd be wrong again. They taught blind people to see by putting sensors on their tongues. Neuroplasticity is a helluva drug, especially when the BCI works in a feedback loop.

*You're not restricted to just electronics. Throw in another BCI at the receiving end, and you can control biological entities. I could move your hand with as much ease as I move mine.

OK, let's focus on the use-of-tongue-for-sight. How many hours a day are you, personally, willing to spend wearing a device that's exactly like BrainPort but geared for detecting ultra-violet light?

Basic humans don't see ultra-violet, but bees and birds do; flora and fauna have evolved to incorporate ultra-violet signals. Wouldn't you like to experience this aspect of the world directly? All you have to do is wear some specialized glasses with a specialized ultra-violet-light camera on the bridge of the nose, connected to a hand-held base unit with CPU and some zoom controls, which in turn is connected to a lozenge stuck to your tongue. You train yourself for a while, figuring out what those funny electrical-shock feelings on your tongue correspond to. I guess you'd need to use some kind of visualization on the monitor, with artificial coloring to highlight the ultra-violet. And after a while--yay!--you can "sense" ultra-violet!

Or, you know, you could just look at those visualizations with artificial coloring, like the rest of us basic humans, and skip the wearing of glasses connected to a hand-held unit connected to the lozenge on your tongue.

BrainPort is a big deal for blind people, because so much of our human infrastructure depends on sight. Similarly, a bee might be utterly lost without that ultraviolet sense, but just how crucial is it for me to see it? And if I have any technology able to sense it, wouldn't I just use that instead of wiring myself up to some gear and retraining my brain?

What about a BrainPort device that's geared towards infra-red? Wouldn't that be cool, see the world like Predator? Or, again... why not just put on some infra-red goggles?

Why in heaven's name would I want to sense WiFi? Isn't it enough that my WiFi-enabled devices do that?

You mistake my examples of the brain being highly plastic with regard to new sensory modalities for a claim that future advances will be nearly as crude.

The number of people with adequate vision who would be happy holding a lollipop in their mouth for the purposes of redundant visual input is rather minimal.

But what if you could have a miniature LIDAR unit embedded on you? They're small enough to throw into an iPhone. Then, with your eyes closed, you would have a sensory perception of everything within your proximity. Behind, above, below. It doesn't matter. That is a strict improvement over normal vision.

Or the ability to hear infrasound and ultrasound. You might hear machinery failing or an impending earthquake before your mundane senses catch up.

Thermal senses? You'll know if your coffee is hot, whether food is done, whether that saucepan is safe to take off the stove. Whether an industrial accident is safe for humans to approach, or if a firefighter can take the risk of opening a door while in the confines of a flame-retardant suit.

I might be able to tell a patient was coming down with a fever just by glancing at them.

What about a BrainPort device that's geared towards infra-red? Wouldn't that be cool, see the world like Predator? Or, again... why not just put on some infra-red goggles?

To avoid the inconvenience of standard infra-red goggles or night vision devices. When you put them on, you're sacrificing standard vision in the process. Of course, for a soldier or hunter at night, their normal vision was already inadequate.

At any rate, there's oodles of useful information in the environment that humans don't have access to, but that we can observe animals benefiting from. Add magnetoception, or an internal GPS sense, and you'll never get lost again.

It's obvious that humans can already do most of these things through utility devices. A BCI makes that connection more seamless, with lower latency and reduced cognitive overhead from translating into the sensory modalities we are used to handling.

Eventually, our environment will shift to accommodate this, as the modern world has rapidly pivoted to taking things like automobiles, smartphones and omnipresent internet for granted. Someone in 1890 was not suffering because he didn't have the non-existent internet. You would, without it.

Eventually, cybernetics will be additive rather than subtractive or mere substitutes. Right now, a cybernetic eye is only useful to someone who has visual issues (NVGs are an example of a visual augment, though a purist would say an augment needs to be hooked up directly to your brain, or else a car counts as a prosthetic leg).

If an augment seems useless or not worth the hassle, don't use it! You might not need magnetoception, but a hiker in an area with spotty signal or GPS jamming might. You might not need in-built LIDAR, but a soldier afraid of FPV drones or someone working in close proximity to industrial machinery might benefit.

BCIs simply hold the promise of liberating us from having to interface with our tech through sight, sound, touch and so on.

You highlight it neatly here: such upgrades only really seem worth it if you're working an information-intensive job, the kind where you'd ordinarily be using some sort of sensor device or array. And modding yourself for something as ephemeral as a job feels excessive/vaguely droneish.

It's a given that we live in environments that are amenable to the barely upgraded baseline human form, since we adapted them to match our needs.

Speaking broadly, humans already go through intensive cognitive and behavioral modification for the purposes of a job. That's called school, college and uni! In physical terms, most jobs require us either to personally move a ton or two of steel and plastic, or to ride in it as passengers. You might need glasses to correct vision, or more rarely cosmetic procedures if the job implicitly demands it.

I doubt we humans will be forced to make ourselves into cyborgs for the purposes of labor, but only because even with these augmentations we would not be cost-competitive with AI systems running industrial robots. I've elaborated further downthread on why I think expecting humans to keep up with the machines is a forlorn hope. Imagine being asked to improve a monkey to the point that it can be an accountant. By the time you're done, it's not really a monkey anymore, and in all likelihood the additional hardware you needed to make a normal monkey fit for the job would be capable of doing it by itself.

That being said, I am a transhumanist, and I will eagerly embrace cybernetics where the benefits outweigh the risks, or simply for the sake of improving my body. Getting legs that let me run faster than Usain Bolt won't make me dispense with a car, but I think they'd be handy. A good BCI would make most portable electronics like phones or laptops redundant, assuming you were happy using it as a wireless link to some kind of computational hardware. I imagine that if you want your compute close at hand, you'd carry around something the size of a USB power bank in your pocket, potentially even just to keep your internal hardware charged up.

Somewhat unrelatedly, have you seen Wildbow's Seek? As a transhumanist, you might find the themes and setting up your alley. One of the protagonists is a cyborg heavily adapted for tight spaces and low-gravity maintenance work.

I've only read (most of) Worm.

But funny you should mention that, because I'm writing a novel where cyborgs are a mainstay (the protagonist is humanoid, but only because he hasn't been pushed further), and the upcoming chapter has one who is basically a pair of frontal lobes in a crab-shaped shell.

You'll see clear Worm inspiration in my work, though I aim for much more of what I perceive as 'realism' in terms of societal and governmental reaction than Worm does, with its desire to have the protagonists punch people on the streets. (I'm aware of the in-universe justifications; I find them lacking.)

Wildbow doesn't get nearly as wild as I do.
