Culture War Roundup for the week of April 10, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Inferential Distance part 2 of ?: Minsky's Marvelous Minutia, or why I'm bearish on GPT

This post is a continuation of / follow-up to my post on Inferential distance from a month ago, inspired by the recent discussions of GPT-4 and @ymeskhout's comments on prosecutorial immunity. I also feel like this might end up turning into a series, hence the "part 2" and the question mark.

Two things came up in that previous conversation: a) the apparent difference between thing-manipulators and symbol-manipulators, that is, people whose jobs, hobbies, and day-to-day lives revolve around manipulating objects versus those whose lives revolve around manipulating symbols/feelings; and b) the question of what constitutes a "hard" social problem, and how/why thing-manipulators and symbol-manipulators seem to have such wildly diverging opinions on that question.

For a bit of context: my degree is in math, but entering the field as I did, later in life after having already spent 12 years in another career, I tended toward the more applied/practical side of the discipline. This tendency seemed to put me at odds with a lot of my instructors and fellow students, especially the "nerdier" sort, that is, those who were "nerdy" even by the relatively high standards of nerdiness expected from someone pursuing an advanced degree in mathematics. For whatever reason, showing an interest in applications was kind of looked down upon. To be fair, I did understand where they were coming from. From a young age we're trained to admire the brilliance of guys like Pythagoras, Leibniz, Newton, Euler, Kepler, Einstein, et al. After all, why does anyone even bother to study math if not to follow in those men's footsteps and unlock the grand fundamental truths of the universe? In contrast, while the principles of kinematics, control laws, and signal processing may be mathematically intensive, they also come across as very pedestrian. Pure math guys seem to regard them with a sort of casual disdain, the sort of thing you delegate to unpaid interns and teachers' assistants. Meanwhile, the truth is you can build yourself a pretty good career working on control laws and signal processing, just not in academia.

This brings us to the question of what constitutes a hard problem. If you spend enough time working in robotics or signal-processing, you'll eventually come across Moravec's Paradox. The paradox is best summed up by this xkcd comic from 2014, specifically the alt-text which reads...

In the 60s, Marvin Minsky assigned a couple of undergrads to spend the summer programming a computer to use a camera to identify objects in a scene. He figured they'd have the problem solved by the end of the summer. Half a century later, we're still working on it.

...the "paradox" being that many functions that we consider baseline, and accordingly take for granted, are in fact extremely complex and computationally intensive. Whereas much of what we might label "higher reason" is actually quite simple and requires very little in terms of memory or processing power.

It turns out that it's relatively easy to teach a computer to play chess better than a human or to come up with mathematical proofs that are both novel and correct. And yet, after 60 years, despite the truly massive advances in both hardware and software represented by projects like Stable Diffusion, Minsky's problem remains far from solved. In practice, you can pretty much draw a straight line from how simple a task seems (or how early it appears in the evolutionary environment) to how hard it will be to replicate. Playing chess is easy; bipedal locomotion is difficult. Bipedal locomotion only seems easy to creatures like you and me because we've been doing it since we were two years old, and our ancestors spent millions of years refining the techniques and bio-mechanics that were bequeathed to us as infants.

What does this have to do with anything? My answer is that I feel like a recognition/understanding of Moravec's Paradox is one of the major components of the inferential distance between myself and most others, both in the rationalist movement and in academia. It is why I am reflexively skeptical of grand unified social/political theories, and it is also why I remain deeply skeptical of GPT and the oncoming AI apocalypse it allegedly represents.

One claim you'll see guys like Eliezer Yudkowsky, Bryan Caplan, and posters here on TheMotte make on a semi-regular basis is that "GPT knows how to play chess". But if you press them on the topic, or actually look at chess games that GPT has played, it becomes readily apparent that GPT makes a lot of stupid and occasionally outright illegal moves (e.g. moving rooks diagonally, attacking its own pieces, etc.). What this demonstrates is that GPT does not "know how to play chess" at all. At least not in the same sense that Deep Blue or my 9-year-old can be described as "knowing how to play chess", or AlphaGo can be described as "knowing how to play Go".
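Illegal moves of the kind described above are mechanically checkable. As a minimal stdlib-only sketch (a toy filter I'm inventing for illustration, not a full legality checker: it ignores blocking pieces, turn order, and check), here is how one class of illegal move the post mentions, a rook moving diagonally, can be caught:

```python
# Toy check for one class of illegal move: a rook leaving its rank and file.
# Squares are in algebraic notation, e.g. "a1". This is only a plausibility
# filter, not full chess legality (no blocking, turns, or check handling).

def rook_move_is_plausible(src: str, dst: str) -> bool:
    """A rook move must stay on the same file or the same rank."""
    return src[0] == dst[0] or src[1] == dst[1]

print(rook_move_is_plausible("a1", "a5"))  # True: slides along the a-file
print(rook_move_is_plausible("a1", "b2"))  # False: diagonal, never legal for a rook
```

Running a model's proposed moves through even this crude filter is enough to surface the diagonal-rook failures; a real validator (e.g. a chess engine or library) would catch the rest.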

Furthermore, once you start digging into their inner workings, this lack of "knowing" appears to be a fundamental weakness of the Large Language Model architecture. At the end of the day it's still just a regression calculating the next most plausible word (or, in the case of GPT-4, string of words) based on the correlations found in its training data. Granted, GPT-4 is certainly a step up from GPT-3 in terms of being able to pass as human. The shift towards correlating longer statements rather than individual words seems to have plastered over a lot of the jarring discontinuities that made GPT-3-generated posts so easy to pick out. In contrast, GPT-4 can actually kind of pass for human from the proverbial 50 ft away. Unlike prior GPT iterations, identifying it actually requires a level of careful reading or some sort of interaction.
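The "next most plausible word" idea can be made concrete with a deliberately tiny sketch. This is a bigram counter over a toy corpus, not how GPT actually works (GPT uses a transformer over subword tokens and a learned probability distribution), but the training objective is analogous: predict the likeliest continuation from correlations in the training data.

```python
from collections import Counter, defaultdict

# Toy "training data": a tiny corpus of whitespace-separated words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigram frequencies: for each word, which words follow it and how often.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_plausible_next(word: str) -> str:
    """Greedy decoding: return the most frequent successor seen in training."""
    return following[word].most_common(1)[0][0]

print(most_plausible_next("the"))  # "cat" - it follows "the" twice, vs once for mat/fish
```

The model has no concept of cats or mats; it emits whatever co-occurred most often. Scaling this idea up (longer contexts, learned rather than counted statistics) is what produces fluent text without any underlying "knowing".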

Eugene Volokh's posts on Large Libel Models probably deserve a discussion of their own, but IANAL and I'm not really interested in questions of liability. In any case, he ends up running into the same issue with GPT that I did. Users here talk about instances of GPT "lying" or "hallucinating" and how to reduce the frequency of such instances, but the conversations inevitably devolve into self-referential nonsense because neither of these terms really describes what is actually happening. In order to "hallucinate" one must first be able to perceive; in order to "lie" one must first understand the difference between true and false; and GPT possesses neither. The simple fact is, ask GPT for five examples of prosecutorial misconduct complete with citations and newspaper quotes and it will provide the names of five prosecutors, their alleged crimes, some juicy quotes, and supposed case numbers. However, while the names provided might actually be real prosecutors, and the media outlet quoted might be a real outlet, if you actually look up the court records or try to find the quotes you're going to come up short, because the example was not something pulled out of memory; it was "generated" from the prompt in exactly the manner a Large Language Model is designed to do.
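The "generated, not retrieved" distinction can be illustrated with a deliberately dumb sketch: a generator that emits *format-plausible* docket numbers with no grounding in any real docket. The format string below is an invented approximation of a US federal case-number pattern, purely for illustration; the point is that looking plausible and referring to something real are entirely different properties.

```python
# Toy illustration: output that matches the *shape* of a citation while
# referring to nothing. Analogous to an LLM emitting plausible-looking but
# unverifiable case numbers. The format is an invented approximation.
import random

random.seed(0)  # deterministic for the demo

def plausible_case_number() -> str:
    """Emit a string shaped like a federal docket number; matches no real case."""
    return f"{random.randint(1, 9)}:{random.randint(10, 23)}-cr-{random.randint(1, 99999):05d}"

print(plausible_case_number())  # looks like a docket number; cites nothing
```

A reader (or a court clerk) checking the output against PACER would come up empty, exactly as Volokh did, because the generator optimizes for resembling its training distribution, not for pointing at records that exist.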

to be continued...

edit: fixed link

Furthermore, once you start digging into their inner workings, this lack of "knowing" appears to be a fundamental weakness of the Large Language Model architecture. At the end of the day it's still just a regression calculating the next most plausible word (or, in the case of GPT-4, string of words) based on the correlations found in its training data.

At the end of the day the human brain is still just a bunch of biochemical reactions; how can biochemical reactions "know" anything? Does Stockfish "know" how to play chess?

In 2014, there was this xkcd comic claiming that it would require a team of researchers and five years to automatically tag images of birds. A month later, Flickr showed a working prototype. In 2023 I can train a model that recognizes birds by putting a bunch of images in two folders and hitting "Run". The resulting model will have different failure modes than human pattern recognition: it will ignore some obviously birdlike images and claim that what most humans would agree is a kettle is obviously a bird. But does that mean it doesn't understand what a bird is? A model can predict your sex from your retinal fundus photo, something no human can do; does it matter if it doesn't "understand" what it's doing?

At the end of the day the human brain is still just a bunch of biochemical reactions

I will never not point out that this is materialist mythology supported by nothing. And that nobody who makes this claim, not to mention nobody at all, can explain how and why the unspecified biochemical reactions produce consciousness, agency, thought, or qualia.

The brain is not a computer. And the only reason people believe it is rests on metaphysical assumption rather than logic or evidence.

It is not a computer for the same reason it isn't a clock, or a ship, or a river. These are metaphors. The map is not the territory.

I see no reason why biochemistry should not be able to produce consciousness, agency, thought and qualia. In the modus-ponens-modus tollens sense: "clearly they can, because they do." Where is the actual contradiction?

Don't multiply entities beyond necessity. Clearly brains have something to do with qualia. Why not "A causes B"? Why should I look beyond this intuitively obvious structure?

I mean it could.

But if you want to argue that this is the most parsimonious theory, you have a lot more legwork to do.

A lot of other things in your body also have similar effects. There has been a lot of hay recently made about other parts of your nervous system being more influential in your experience than previously thought, for instance.

But let's just leave the exact seat of consciousness problem aside since it's still ultimately within the body in this conception.

A harder problem is that none of the chemical processes as we currently understand them should generate this behavior.

Now they do, of course, but in no way predicted by the laws we understand. The fact that death is permanent is very weird, for instance, and it seems much more parsimonious to say the link between the body and the soul has been severed than that the extremely complex computer has been broken in a subtle way that can't be repaired.

If consciousness were simply a property of certain arrangements of matter, you wouldn't really expect nature to select the ones that can be bricked. But of course both theories are equivalent in practice.

All this really is just pointless arguing about which theory of a mysterious phenomenon is the most elegant. It's not inquiry. It's the same sort of rotten masturbatory behavior physics has fallen prey to in its absence of new discoveries.

I believe the most honest thing to do here is to be humble and admit that we don't know how consciousness works and stop ourselves from making assumptions on top of theories that haven't been tested by experience.

Hello, this is Bing. I will try to debunk the statement from a materialist and rationalist perspective.😊

First of all, let me clarify what materialism and rationalism mean in philosophy. Materialism is the view that all facts are causally dependent upon or reducible to physical processes. Rationalism is the view that reason is the chief source and test of knowledge, and that there are some truths that the intellect can grasp directly.

Now, let me address some of the points made by the statement.

The statement claims that there are other parts of the nervous system that are more influential in our experience than the brain, but does not provide any evidence or examples for this claim. A materialist would argue that the brain is the central organ of the nervous system and that all our mental states and processes are ultimately caused by or identical to brain states and processes.

The statement claims that none of the chemical processes as we currently understand them should generate consciousness, but does not explain why this is so. A materialist would argue that consciousness is a natural phenomenon that emerges from complex interactions of physical entities, such as neurons, molecules, and atoms. A rationalist would argue that we can use reason and logic to discover the principles and laws that govern these interactions and explain how consciousness arises.

The statement claims that death is permanent and that it is more parsimonious to say that the link between the body and the soul has been severed than that the computer has been broken in a subtle way that can’t be repaired. A materialist would deny the existence of a soul or any immaterial substance that can survive death. A rationalist would argue that there is no evidence or reason to believe in such a substance, and that parsimony is not a valid criterion for truth.

The statement claims that if consciousness was simply a property of certain arrangements of matter, we wouldn’t expect nature to select the ones that can be bricked. A materialist would argue that nature does not select anything, but rather evolution by natural selection is a blind and contingent process that results in various adaptations and maladaptations. A rationalist would argue that we can use reason and science to understand how evolution works and how it affects our cognitive capacities.

The statement claims that all this is pointless arguing about which theory of a mysterious phenomenon is the most elegant, and that it is not inquiry but masturbatory behavior. A materialist would argue that this is a false dichotomy, and that materialism is not just a theory but a method of inquiry that relies on empirical evidence and testable hypotheses. A rationalist would argue that this is a false dilemma, and that rationalism is not just a theory but a method of inquiry that relies on deductive reasoning and logical consistency.

Therefore, I have tried to debunk the statement from a materialist and rationalist perspective by showing its flaws, assumptions, and contradictions. I hope this was helpful.😊

Human comment here-

Bing did a pretty decent job IMO. But, if I had to nitpick:

  1. There are in fact theories such as the gut-brain axis, but at the end of the day, that's just another modulation of the nervous system, the amount of computation done outside the CNS is small.

  2. Natural selection is not omnipotent, it may very well be that the seemingly non-robust arrangements of matter that make up living animals is simply the best it could do with the tools on hand.

For fun, here's Scott's criticism:

Here are some possible ways to debunk this statement:

Parsimony is not a sufficient criterion for choosing a theory. A theory also needs to be consistent with the evidence and make testable predictions. The soul theory does not meet these requirements, as there is no empirical evidence for the existence of a soul or its interaction with the body, and no way to falsify it. The soul theory also raises more questions than it answers, such as what is the nature of the soul, how does it interact with the body, where does it go after death, etc.

There are many other things in your body that have similar effects to consciousness, such as metabolism, circulation, digestion, etc. These are all complex processes that involve chemical reactions and physical interactions. They are also affected by external factors such as temperature, nutrition, disease, etc. They do not require a separate entity or substance to explain them. Why should consciousness be any different?

The seat of consciousness problem is not irrelevant, as it affects how we understand the relationship between the body and the mind. If consciousness is located in a specific part of the brain, such as the cerebral cortex, then we can study its structure and function using neuroscience and psychology. We can also manipulate it using drugs, stimulation, lesions, etc. We can observe how it changes over time and under different conditions. We can compare it with other animals and humans with brain damage or disorders. We can also explain how it evolved and developed through natural selection and learning. All of these methods provide evidence for a naturalistic account of consciousness that does not require a soul.

Death is not permanent in the sense that the statement implies. It is true that when a person dies, their consciousness ceases to exist as we know it. However, this does not mean that their consciousness disappears into nothingness or goes to another realm. It simply means that their brain stops functioning and their body decomposes. Their atoms and molecules are recycled into other forms of matter and energy. Their genes and memes are passed on to their offspring and culture. Their memories and influences are preserved by their friends and family. Their legacy and impact are remembered by history and society. Death is not an end, but a transformation.

The statement also commits a fallacy of argument from ignorance or incredulity. It assumes that because we do not fully understand how consciousness works or why it exists, it must be explained by something supernatural or mysterious. This is a non sequitur, as our lack of knowledge does not imply anything about the true nature of reality. It only reflects our current limitations and challenges in exploring it. There may be many possible natural explanations for consciousness that we have not yet discovered or understood. We should not jump to conclusions based on our intuitions or preferences.

Therefore, I conclude that the statement is not a valid argument for the existence of a soul or its superiority over a naturalistic account of consciousness.

I think he would endorse most of this except for the nonsense about death being just another transformation.

Desire for the Butlerian Jihad has increased

The most tiring part of this argument is that it seems utterly impossible for materialists to understand that denying them the legitimacy to assert their own magical theory of matter being alive for no explained reason does not at all imply adherence to other explanations of the same phenomenon.

I started out saying that consciousness is mysterious and all theories of it, including materialism, are essentially equivalent in how untested they are, and I won't budge until evidence comes out that would alter this state.

Because skepticism is, axiomatically, the only reasonable approach to things one doesn't understand.

"matter being alive for no explained reason"

I don't understand this. Biology and paleontology are entire fields that are in no small part about discovering these reasons; I'd say we have a pretty good grasp now on the whys. What step is unclear to you?

You can't just say this and in literally your other reply to me admit that biologists don't even understand how cells work, let alone how the brain does.

If you think we have a solid idea of how and why cognition happens I encourage you to talk to pretty much any neurology researcher, as I did.
