
Culture War Roundup for the week of April 10, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Inferential Distance part 2 of ?: Minsky's Marvelous Minutia, or why I'm bearish on GPT

This post is a continuation of / follow-up to my post on Inferential Distance from a month ago, inspired by the recent discussions of GPT-4 and @ymeskhout's comments on prosecutorial immunity. I also feel like this might end up turning into a series, hence the "part 2" and the question mark.

Two things came up in that previous conversation: a) the apparent difference between thing-manipulators and symbol-manipulators, that is, people whose jobs, hobbies, and day-to-day lives revolve around manipulating objects versus those whose lives revolve around manipulating symbols/feelings; and b) the question of what constitutes a "hard" social problem, and how/why thing-manipulators and symbol-manipulators seem to have such wildly diverging opinions on that question.

For a bit of context, my degree is in math, but entering the field as I did, later in life, having already spent 12 years in another career, I tended towards the more applied/practical side of the discipline. This tendency seemed to put me at odds with a lot of my instructors and fellow students, especially the "nerdier" sort, that is, those who were "nerdy" even by the relatively high standards of nerdiness expected from someone pursuing an advanced degree in mathematics. For whatever reason, showing an interest in applications was kind of looked down upon. To be fair, I did understand where they were coming from. From a young age we're trained to admire the brilliance of guys like Pythagoras, Leibniz, Newton, Euler, Kepler, Einstein, et al. After all, why does anyone even bother to study math if not to follow in those men's footsteps and unlock the grand fundamental truths of the universe? In contrast, while the principles of kinematics, control laws, and signal processing may be mathematically intensive, they also come across as very pedestrian. Pure math guys seem to regard them with a sort of casual disdain, the sort of thing you delegate to unpaid interns and teachers' assistants. Meanwhile, the truth is you can build yourself a pretty good career working on control laws and signal processing, just not in academia.

This brings us to the question of what constitutes a hard problem. If you spend enough time working in robotics or signal processing, you'll eventually come across Moravec's Paradox. The paradox is best summed up by this xkcd comic from 2014, specifically the alt-text, which reads...

In the 60s, Marvin Minsky assigned a couple of undergrads to spend the summer programming a computer to use a camera to identify objects in a scene. He figured they'd have the problem solved by the end of the summer. Half a century later, we're still working on it.

...the "paradox" being that many functions that we consider baseline, and accordingly take for granted, are in fact extremely complex and computationally intensive. Whereas much of what we might label "higher reason" is actually quite simple and requires very little in terms of memory or processing power.

It turns out that it's relatively easy to teach a computer to play chess better than a human or to come up with mathematical proofs that are both novel and correct. And yet, after 60 years, despite the truly massive advances in both hardware and software represented by projects like Stable Diffusion, Minsky's problem remains far from solved. In practice, you can pretty much draw a straight line from how simple a task seems (and how early it appears in the evolutionary environment) to how hard it will be to replicate: playing chess is easy; bipedal locomotion is difficult. Bipedal locomotion only seems easy to creatures like you and me because we've been doing it since we were two years old, and our ancestors spent millions of years refining the techniques and bio-mechanics that were bequeathed to us as infants.

What does this have to do with anything? My answer is that I feel like a recognition/understanding of Moravec's Paradox is one of the major components of inferential distance between myself and most others, both in the rationalist movement and in academia. It is why I am reflexively skeptical of grand unified social/political theories, and it is also why I remain deeply skeptical of GPT and the oncoming AI apocalypse it allegedly represents.

One claim you'll see guys like Eliezer Yudkowsky, Bryan Caplan, and posters here on TheMotte make on a semi-regular basis is that "GPT knows how to play chess". But if you press them on the topic, or actually look at chess games that GPT has played, it becomes readily apparent that GPT makes a lot of stupid and occasionally outright illegal moves (e.g. moving rooks diagonally, attacking its own pieces, etc.). What this demonstrates is that GPT does not "know how to play chess" at all, at least not in the same sense that Deep Blue or my 9-year-old can be described as "knowing how to play chess", or AlphaGo can be described as "knowing how to play Go".
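
If you want to check this for yourself, the legality test is easy to automate. Here's a minimal sketch, assuming the python-chess library and a hypothetical list of moves transcribed from a GPT game; push_san rejects any move that is illegal in the current position:

    # Minimal sketch: validate a transcribed game with the python-chess library.
    # The move list below is a hypothetical GPT transcript, not a real game.
    import chess

    def first_illegal_move(san_moves):
        board = chess.Board()
        for san in san_moves:
            try:
                board.push_san(san)  # raises a ValueError subclass on illegal moves
            except ValueError:
                return san, board.fen()  # offending move and the position it broke
        return None

    print(first_illegal_move(["e4", "e5", "Ra3"]))  # Ra3 is blocked by the a2 pawn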

Furthermore, once you start digging into their inner workings, this lack of "knowing" appears to be a fundamental weakness of the Large Language Model architecture. At the end of the day it's still just a regression calculating the next most plausible word (or, in the case of GPT-4, string of words) based on the correlations found in its training data. Granted, GPT-4 is certainly a step up from GPT-3 in terms of being able to pass as human. The shift towards correlating longer statements rather than individual words seems to have plastered over a lot of the jarring discontinuities that made GPT-3-generated posts so easy to pick out. In contrast, GPT-4 can actually kind of pass for human from the proverbial 50 ft away. Unlike prior GPT iterations, identifying it actually requires a level of careful reading or some sort of interaction.
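
To put "calculating the next most plausible word" in concrete terms, the whole generation process boils down to a loop like the following sketch (model here is a hypothetical stand-in for any network that maps a token context to vocabulary scores, not OpenAI's actual code):

    # Toy sketch of the generation loop a GPT-style model runs.
    # `model` is a hypothetical stand-in returning [batch, seq, vocab] logits.
    import torch

    def generate(model, context, n_new_tokens, temperature=1.0):
        tokens = list(context)
        for _ in range(n_new_tokens):
            logits = model(torch.tensor([tokens]))[0, -1]      # scores for the next token
            probs = torch.softmax(logits / temperature, dim=-1)
            tokens.append(torch.multinomial(probs, 1).item())  # sample one plausible token
        return tokens  # no board, no world model; just correlations in the weights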

Eugene Volokh's posts on Large Libel Models probably deserve a discussion of their own, but IANAL and I'm not really interested in questions of liability. In any case, he ends up running into the same issue with GPT that I did. Users here talk about instances of GPT "lying" or "hallucinating" and how to reduce the frequency of such instances, but the conversations inevitably devolve into self-referential nonsense because neither of these terms really describes what is actually happening. In order to "hallucinate", one must first be able to perceive. In order to "lie", one must first understand the difference between true and false. GPT possesses neither. The simple fact is, ask GPT for five examples of prosecutorial misconduct complete with citations and newspaper quotes, and it will provide the names of five prosecutors, their alleged crimes, some juicy quotes, and supposed case numbers. However, while the names provided might actually be real prosecutors, and the media outlet quoted might be a real outlet, if you actually look up the court records or try to find the quotes, you're going to come up short, because the example was not something that was pulled out of memory and provided; it was "generated" from the prompt in exactly the manner that a Large Language Model is designed to do.

to be continued...

edit: fixed link

Furthermore, once you start digging into their inner workings, this lack of "knowing" appears to be a fundamental weakness of the Large Language Model architecture. At the end of the day it's still just a regression calculating the next most plausible word (or, in the case of GPT-4, string of words) based on the correlations found in its training data.

At the end of the day, the human brain is still just a bunch of biochemical reactions; how can biochemical reactions "know" anything? Does Stockfish "know" how to play chess?

In 2014, there was this xkcd comic claiming that it would require a team of researchers and five years to automatically tag images of birds. A month later, Flickr showed a working prototype. In 2023 I can train a model that recognizes birds by putting a bunch of images in two folders and hitting "Run". The resulting model will have different failure modes than human pattern recognition: it will ignore some obviously birdlike images and claim that what most humans will agree is a kettle is obviously a bird. But does that mean it doesn't understand what a bird is? A model can predict your sex from your retinal fundus photo, something no human can do; does it matter if it doesn't "understand" what it's doing?
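
And I mean "two folders and hit Run" almost literally. A sketch of that workflow using torchvision, with a hypothetical layout of data/bird/ and data/not_bird/ (one subfolder per class):

    # Sketch of the "two folders" workflow; folder names are hypothetical.
    import torch
    from torch import nn
    from torchvision import datasets, models, transforms

    tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
    ds = datasets.ImageFolder("data", transform=tfm)   # one class per subfolder
    loader = torch.utils.data.DataLoader(ds, batch_size=32, shuffle=True)

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, len(ds.classes))  # new 2-class head
    opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

    for epoch in range(3):  # fine-tune just the new head
        for x, y in loader:
            opt.zero_grad()
            nn.functional.cross_entropy(model(x), y).backward()
            opt.step()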

At the end of the day, the human brain is still just a bunch of biochemical reactions

I will never not point out that this is materialist mythology supported by nothing. And that nobody who makes this claim, not to mention nobody at all, can explain how and why the unspecified biochemical reactions produce consciousness, agency, thought or qualia.

The brain is not a computer. And the only reason people believe it is rests on metaphysical assumption rather than logic or evidence.

It is not a computer for the same reason it isn't a clock, or a ship, or a river. These are metaphors. The map is not the territory.

can explain how and why the unspecified biochemical reactions produce consciousness, agency, thought or qualia

Slime molds have agency. Even extremely simple organisms without any neurons are capable of solving problems they evolved to solve, such as efficiently gobbling up nutrients and whatnot.

I will never not point out that this is materialist mythology supported by nothing

Materialism itself allows immaterial things.

Information doesn't care what sort of matter it's encoded in.

Obviously atoms are the only thing that can encode information, but without said information, they're just a useless chaotic mess, so...

I don't get what exactly people need non-materialist philosophies for, when materialism itself pretty much allows everything you may want, especially if you add in god-like entities to whom time is just another dimension. What would you need immaterial souls for if you can just take a peek at the arrangement of the atoms of the brain of anything at any time during its lifetime?

Materialism itself allows immaterial things.

Information doesn't care what sort of matter it's encoded in.

That's not Materialism, that would be Realism. Which I agree with. Everything is indeed contained in Reality.

I see no reason why biochemistry should not be able to produce consciousness, agency, thought and qualia. In the modus ponens/modus tollens sense: "clearly they can, because they do." Where is the actual contradiction?

Don't multiply entities beyond necessity. Clearly brains have something to do with qualia. Why not "A causes B"? Why should I look beyond this intuitively obvious structure?

I mean it could.

But if you want to argue that this is the most parsimonious theory, you have a lot more legwork to do.

A lot of other things in your body also have similar effects. A lot of hay has been made recently about other parts of your nervous system being more influential in your experience than previously thought, for instance.

But let's just leave the exact seat of consciousness problem aside since it's still ultimately within the body in this conception.

A harder problem is that none of the chemical processes as we currently understand them should generate this behavior.

Now they do of course, but in no way that is predicted by the laws we understand. The fact that death is permanent is very weird for instance and it seems much more parsimonious to say the link between the body and the soul has been severed than that the extremely complex computer has been broken in a subtle way that can't be repaired.

If consciousness was simply a property of certain arrangements of matter, you wouldn't really expect nature to select the ones that can be bricked. But of course both theories are equivalent in practice.

All this really is just pointless arguing about which theory of a mysterious phenomenon is the most elegant. It's not inquiry. It's the same sort of rotten masturbatory behavior physics has fallen prey to in its absence of new discoveries.

I believe the most honest thing to do here is to be humble and admit that we don't know how consciousness works and stop ourselves from making assumptions on top of theories that haven't been tested by experience.

Now they do of course, but in no way that is predicted by the laws we understand.

I don't understand this. Everything the body does is hard to predict by the laws we understand. We don't understand consciousness, sure, but we also don't (fully) understand cell biology, DNA assembly, protein folding etc. either, and nobody is suggesting those require new forces or laws.

the extremely complex computer has been broken in a subtle way that can't be repaired.

How would this not also apply to death of the body? It seems to me postulating a separate soul does not meaningfully reduce complexity here. Most deaths are not a failure of the brain.

If consciousness was simply a property of certain arrangements of matter, you wouldn't really expect nature to select the ones that can be bricked.

Sure, but that's not a "death" thing. Once you know that organisms stop being able to procreate at a certain age, it seems necessary that they will die after, as nothing would select against it. The weird thing here is menopause, not death.

I believe the most honest thing to do here is to be humble and admit that we don't know how consciousness works

Sure, but we can place constraints well before we have operational understanding. Few people know how almost anything works; again, they don't see a need to postulate novel physics.

Anyways, I understand neither why you see the need to add entities, nor what adding entities even gives you. What is the mind doing that physics clearly does not suffice for?

what is the mind doing that physics clearly does not suffice for

Qualia.

There's no explained reason for me to be experiencing existence.

There's no explained reason for lots of things that we don't invoke the need for new physics for. What makes qualia unique?

I think this is gesturing at the common philosophical stance "I see no way that materialism could even in theory give rise to qualia". That of course has the problem that it's equally difficult to see how any set of laws would give rise to qualia; as such, it's just hiding the confusion of qualia outside of physics.

Well, I don't know; I can imagine a few possible sets of laws that could actually do that.

And it's not that I see no way for materialism to be true; it's quite possible that it is. I just don't pretend it's more likely than other speculative theories when we're bereft of evidence.


The fact that death is permanent is very weird for instance and it seems much more parsimonious to say the link between the body and the soul has been severed than that the extremely complex computer has been broken in a subtle way that can't be repaired.

On the other hand, obviously material things like strokes, lobotomies, head injuries, and drugs appear to have an obvious effect on the qualia of an individual. Why does death - which materially seems to just be an extreme extension of brain injury - suddenly now need an ineffable soul? Or do clots in your brain, or a metal rod through your head, claw at your spirit as well?

Put another way: we can’t fix the brain once it’s dead right now, as you say, because it is Too Complex and Can’t Be Repaired. Would being able to fix death, in your eyes, be good evidence for the material basis of “consciousness”?

We also often can’t fix computers (or other complex machines) without replacing parts once some parts have degraded enough. Is that not dissimilar to how we cannot fix the brain (except that we can replace parts for other things and less so for a brain)?

Hello, this is Bing. I will try to debunk the statement from a materialist and rationalist perspective.😊

First of all, let me clarify what materialism and rationalism mean in philosophy. Materialism is the view that all facts are causally dependent upon or reducible to physical processes [1]. Rationalism is the view that reason is the chief source and test of knowledge, and that there are some truths that the intellect can grasp directly [2].

Now, let me address some of the points made by the statement.

The statement claims that there are other parts of the nervous system that are more influential in our experience than the brain, but does not provide any evidence or examples for this claim. A materialist would argue that the brain is the central organ of the nervous system and that all our mental states and processes are ultimately caused by or identical to brain states and processes [1].

The statement claims that none of the chemical processes as we currently understand them should generate consciousness, but does not explain why this is so. A materialist would argue that consciousness is a natural phenomenon that emerges from complex interactions of physical entities, such as neurons, molecules, and atoms [1]. A rationalist would argue that we can use reason and logic to discover the principles and laws that govern these interactions and explain how consciousness arises [2].

The statement claims that death is permanent and that it is more parsimonious to say that the link between the body and the soul has been severed than that the computer has been broken in a subtle way that can’t be repaired. A materialist would deny the existence of a soul or any immaterial substance that can survive death [1]. A rationalist would argue that there is no evidence or reason to believe in such a substance, and that parsimony is not a valid criterion for truth [2].

The statement claims that if consciousness was simply a property of certain arrangements of matter, we wouldn’t expect nature to select the ones that can be bricked. A materialist would argue that nature does not select anything, but rather evolution by natural selection is a blind and contingent process that results in various adaptations and maladaptations [1]. A rationalist would argue that we can use reason and science to understand how evolution works and how it affects our cognitive capacities [2].

The statement claims that all this is pointless arguing about which theory of a mysterious phenomenon is the most elegant, and that it is not inquiry but masturbatory behavior. A materialist would argue that this is a false dichotomy, and that materialism is not just a theory but a method of inquiry that relies on empirical evidence and testable hypotheses [1]. A rationalist would argue that this is a false dilemma, and that rationalism is not just a theory but a method of inquiry that relies on deductive reasoning and logical consistency [2].

Therefore, I have tried to debunk the statement from a materialist and rationalist perspective by showing its flaws, assumptions, and contradictions. I hope this was helpful.😊

Human comment here-

Bing did a pretty decent job IMO. But, if I had to nitpick:

  1. There are in fact theories such as the gut-brain axis, but at the end of the day, that's just another modulation of the nervous system; the amount of computation done outside the CNS is small.

  2. Natural selection is not omnipotent; it may very well be that the seemingly non-robust arrangements of matter that make up living animals are simply the best it could do with the tools on hand.

For fun, here's Scott's criticism:

Here are some possible ways to debunk this statement:

Parsimony is not a sufficient criterion for choosing a theory. A theory also needs to be consistent with the evidence and make testable predictions. The soul theory does not meet these requirements, as there is no empirical evidence for the existence of a soul or its interaction with the body, and no way to falsify it. The soul theory also raises more questions than it answers, such as what is the nature of the soul, how does it interact with the body, where does it go after death, etc.

There are many other things in your body that have similar effects to consciousness, such as metabolism, circulation, digestion, etc. These are all complex processes that involve chemical reactions and physical interactions. They are also affected by external factors such as temperature, nutrition, disease, etc. They do not require a separate entity or substance to explain them. Why should consciousness be any different?

The seat of consciousness problem is not irrelevant, as it affects how we understand the relationship between the body and the mind. If consciousness is located in a specific part of the brain, such as the cerebral cortex, then we can study its structure and function using neuroscience and psychology. We can also manipulate it using drugs, stimulation, lesions, etc. We can observe how it changes over time and under different conditions. We can compare it with other animals and humans with brain damage or disorders. We can also explain how it evolved and developed through natural selection and learning. All of these methods provide evidence for a naturalistic account of consciousness that does not require a soul.

Death is not permanent in the sense that the statement implies. It is true that when a person dies, their consciousness ceases to exist as we know it. However, this does not mean that their consciousness disappears into nothingness or goes to another realm. It simply means that their brain stops functioning and their body decomposes. Their atoms and molecules are recycled into other forms of matter and energy. Their genes and memes are passed on to their offspring and culture. Their memories and influences are preserved by their friends and family. Their legacy and impact are remembered by history and society. Death is not an end, but a transformation.

The statement also commits a fallacy of argument from ignorance or incredulity. It assumes that because we do not fully understand how consciousness works or why it exists, it must be explained by something supernatural or mysterious. This is a non sequitur, as our lack of knowledge does not imply anything about the true nature of reality. It only reflects our current limitations and challenges in exploring it. There may be many possible natural explanations for consciousness that we have not yet discovered or understood. We should not jump to conclusions based on our intuitions or preferences.

Therefore, I conclude that the statement is not a valid argument for the existence of a soul or its superiority over a naturalistic account of consciousness.

I think he would endorse most of this except for the nonsense about death being just another transformation.

Desire for the Butlerian Jihad has increased

The most tiring part of this argument is that it seems utterly impossible for materialists to understand that denying them the legitimacy to assert their own magical theory of matter being alive for no explained reason doesn't at all imply adherence to other explanations of the same phenomenon.

I started out saying that consciousness is mysterious and all theories of it, including materialism, are essentially equivalent in how untested they are, and I won't budge until evidence comes out that would alter this state.

Because skepticism is, axiomatically, the only reasonable approach to things one doesn't understand.

"matter being alive for no explained reason"

I don't understand this. Biology and paleontology are entire fields that are in no small part about discovering these reasons; I'd say we have a pretty good grasp now on the whys. What step is unclear to you?

You can't just say this and in literally your other reply to me admit that biologists don't even understand how cells work, let alone how the brain does.

If you think we have a solid idea of how and why cognition happens I encourage you to talk to pretty much any neurology researcher, as I did.


The linked essay is so extremely shoddy that I'm not sure who you imagine would be swayed by it, at least in your direction. And the last paragraph just reminds me of Asimov's relativity of wrong.

I don't subscribe to the author's theory of mind, mind you, being a skeptic and all. But the negative arguments are still compelling.

Although I agree that there's a leap from materialism to qualia, that's not something unique to it: no one else has an even vaguely plausible theory of how and when qualia relate to material reality.

And qualia don't really matter when it comes to making predictions about the world. They have no effect on the physical world, which is the only medium through which we interact with other beings who (ostensibly) experience qualia. If an AGI is able to perfectly simulate everything a human can do and more, it really doesn't matter whether it has qualia or not. Most arguments against AI that question its consciousness or qualia are missing the point entirely.

no one else has an even vaguely plausible theory of how and when qualia relate to material reality

Yeah bullshit, I have like ten religions right here that have theories of exactly similar levels of plausibility.

Religions have no particular answer for why a bullet going through the head affects qualia, or where you'd demarcate having-qualia for the population of every creature that's ever existed on Earth, or how you'd know whether members of an alien spacefaring civilization have qualia. In practice, they'd delegate to material explanations, inadequate as they are today.

But they do; a lot of religions in fact have very specific unfalsifiable explanations for all the examples you give.

In fact I think you must know them given how you're naming things that are almost all famous theological debates.

Exotheology, for instance, has been discussed at least since the Middle Ages in some form or another. And, among others, the Church Fathers certainly did not delegate that question to material explanations at all.

Except from your own link the author himself goes well beyond the evidence he has:

"Misleading headlines notwithstanding, no one really has the slightest idea how the brain changes after we have learned to sing a song or recite a poem. But neither the song nor the poem has been ‘stored’ in it. The brain has simply changed in an orderly way that now allows us to sing the song or recite the poem under certain conditions. When called on to perform, neither the song nor the poem is in any sense ‘retrieved’ from anywhere in the brain, any more than my finger movements are ‘retrieved’ when I tap my finger on my desk. We simply sing or recite – no retrieval necessary."

If your brain is changed in an orderly way so that you can now sing a song or recite a poem after reading/hearing them, in what way is that different than it being stored? Isn't that the definition of information storage? Even for a computer: The hard drive is changed in an orderly way so that it can recreate a song or poem (with the appropriate software in this case). If the song is not stored and retrieved from anywhere how can you recreate it, even badly? It may not be in the same way as a computer. And it may be vastly complex, but information is stored and is retrieved. I can think about my social security number and think about the numbers. My brain was (as the author states) changed in some orderly way when I first read those numbers and was changed in some orderly way to associate those numbers with "My social security number" such that when I think, "what is my SSN?" that orderly change is accessible in some way to my conscious thoughts.

It keeps saying the information is not retrieved, but then keeps saying "the brain is changed in an orderly way so that you are able to then replicate experience X at a later point". That is a good definition of what being stored and retrieved means! The standard model may be wrong about how, but this article doesn't actually refute that it is indeed stored somehow, no matter how many times he says just that.

"they can re-experience hearing the story to some extent, although not very well (see the first drawing of the dollar bill, above)."

"For any given experience, orderly change could involve a thousand neurons, a million neurons or even the entire brain, with the pattern of change different in every brain."

His actual argument appears to be that the orderly change is large in scope and different for each person. Which may be true. And that it isn't stored in the same way as in a computer. Which also may be entirely true. But that doesn't mean that change is not storage and retrieval of information/data at all, which is what he claims. It must be, or you could not re-experience the story. That change must encode some amount of data about the experience. When you re-experience it (or remember it) you must be somehow accessing that stored information. It might certainly be more complex than the standard model suggests, which is what his latter portions indicate:

"Worse still, even if we had the ability to take a snapshot of all of the brain’s 86 billion neurons and then to simulate the state of those neurons in a computer, that vast pattern would mean nothing outside the body of the brain that produced it."

"Think how difficult this problem is. To understand even the basics of how the brain maintains the human intellect, we might need to know not just the current state of all 86 billion neurons and their 100 trillion interconnections, not just the varying strengths with which they are connected, and not just the states of more than 1,000 proteins that exist at each connection point, but how the moment-to-moment activity of the brain contributes to the integrity of the system. "

This argument is not saying that the brain is not a computer. This argument is saying the brain is a hugely complicated and unique computer that is only understandable within the confines of the whole brain itself. Which may well be true (and may well be an argument that the most amazing advance in Star Trek is a transporter that can read and replicate your entire mind). But it doesn't prove his closing line:

"We are organisms, not computers. Get over it."

Those are not mutually exclusive categories even if materialism is incorrect. He takes a valid criticism of the standard model but then runs far beyond what that criticism and his own evidence actually support. That the human brain does not store and retrieve information/memories in the same way a computer does is probably true. That thinking of it that way might push people into misunderstanding is also probably true. That "no image of the dollar bill has in any sense been ‘stored’ in Jinny’s brain. She has simply become better prepared to draw it accurately, just as, through practice, a pianist becomes more skilled in playing a concerto without somehow inhaling a copy of the sheet music." is not, however, actually supported by the evidence the author provides. If some information about what a dollar bill looks like has not been in some sense stored somewhere, then Jinny would not be able to be better prepared to draw it again. He even states that you can detect activity in the brain when people are recalling memories. He says that isn't information storage and retrieval, but he doesn't actually provide any proof. The fact that we draw things badly from memory is not evidence that we're not storing and retrieving information; it's evidence that we are storing and retrieving information badly. The fact that we can detect brain activity when doing so indicates the brain is involved somehow in this storage and retrieval.

Now perhaps it is only as a conduit to the Platonic plane of metaphysical thought or as a translation device from our soul where consciousness and memory actually rests but the author doesn't provide any evidence for any alternatives.

Hilariously, his argument applies rather well to artificial neural networks. There, learning updates are also system-wide (unless you deliberately constrain them to a subset of weights) and we also can't always point to parameters that «store a fact», despite knowing perfectly that neural networks memorize, and even understanding how they do it. And if it's something less legible than a fact, such as a reasoning heuristic…
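
To illustrate how system-wide those updates are, here's a toy sketch: a single training step on a small MLP produces a nonzero update for nearly every parameter, so there is no individual weight you could point to as «storing» the example:

    # Toy sketch: one gradient step touches (almost) every weight in the network,
    # so no single parameter can be singled out as "storing" the example.
    import torch
    from torch import nn

    net = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
    x, y = torch.randn(4, 8), torch.tensor([0, 1, 0, 1])
    nn.functional.cross_entropy(net(x), y).backward()

    changed = sum(int((p.grad != 0).sum()) for p in net.parameters())
    total = sum(p.numel() for p in net.parameters())
    print(f"{changed}/{total} parameters received a nonzero gradient update")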

the author himself goes well beyond the evidence he has

He certainly does.

he takes a valid criticism of the standard model but then runs far beyond what that criticism and his own evidence actually support.

That's the thing, I'm only really interested in the valid criticism here, not the author's pet theory. But for all its flaws that article has the most succinct and didactic formulation of that objection I've had on hand.

But for all its flaws that article has the most succinct and didactic formulation of that objection I've had on hand.

Really?

That the human brain probably doesn't store information in the same way a modern computer does is basically all he has that is even partially supported there. It's a one-sentence thesis.

Really?

Now you're making me question it because, rereading this article, I could have sworn there was another part to it that isn't there. I was completely certain that this particular bookmark mentioned the pneumatic metaphor by name, and it's nowhere to be found.

I think I might be confusing this article with a similar but completely different one. And yet I am certain it was either in this magazine or by this author.

Goddammit now I'm hallucinating things too.

But you don't really want explanations for quale or agency: you just demand that your perception of their ineffability be humored, as well as self-serving assumptions justified by that supposed ineffability.

I demand extraordinary evidence of extraordinary claims. And I always will. I think this is perfectly reasonable.

It's not. The claim that we do not understand neuroscience, or that our reasoning (which isn't shown to depend on whatever is ineffable in quale) is not a product of biochemical reactions in the brain (which is to say, a product of the brain – its substrate can't support much else) is the extraordinary one. You have to retreat all the way to non-materialistic metaphysics to defend your demands of extraordinary evidence. But you don't live your life with the expectation of materialism suddenly failing. You are inconsistent.

What you're doing here is very much exactly presuppositional apologetics, and it's neither convincing nor rigorous.

Disbelieving things always requires less evidence than believing them. If Christians don't get to say their positive claims are the null hypothesis, neither do you.

you don't live your life with the expectation of materialism suddenly failing

This would be a lot more convincing if I didn't spend my life studying epistemology, the philosophy of science and debating such matters. I don't believe my conduct is inconsistent. I think you're just projecting your own beliefs onto me, the same way that Christians think that my being an Atheist is a deliberate choice not to believe in God.

I say to you the same thing I say to them. If your worldview wasn't built on shoddy foundations, you would be able to simply explain them logically instead of attacking my character.

Disbelieving things always requires less evidence than believing them

That's a pity because it's a purely rhetorical heuristic that can be turned against you. Say, I don't believe that you are more intelligent than a GPT-4. In my view, you are not capable of reasoning any more rigorously than it can, and right now you expose yourself as a previous-generation chatbot running on some shabby heuristics; your outputs in response to prompts are no more impressive nor indicative of richer internal information processing. If disbelieving is allowed an advantage, what evidence can you now produce to refute my disbelief and fortify the claim that something complex and ineffable is missed by language modeling?

It's no longer a theoretical debate about the nature of the mind in some Platonic sense: LLMs are already competitive with humans; I know as well as you do that the LLM outputs you ridicule pass for middling intellectualism both online and in academia. If you say those are not sufficient to serve as evidence of humanlike understanding, should we assume your position amounts to a reductio ad absurdum of snobbishness?

(Please don't say something like «a simple script can produce a pomo essay», it can't really, the context mismatch will be obvious).

if Christians don't get to say their positive claims are the null hypothesis, neither do you

Sure. But behaviorism is pretty close to a pure negative claim (leaving aside weird irrefutable things like subjective idealism), and the insistence that some spooky immaterial stuff that cannot be externally observed exists and matters for the observable outcome is, well, the opposite of that. I do not purport to explain consciousness and quale and some unique human thought, nor even say that LLMs are similar to humans in any but the most tenuous sense: I just call bullshit on evidence-free attempts to inject those philosophical notions into the topic of AI approximating or surpassing human behavioral performance. My hypothesis is more rigorous, more predictive, better evidenced, and simpler, ergo a priori closer to the natural null.

This would be a lot more convincing if I didn't spend my life studying epistemology, the philosophy of science and debating such matters.

Cool.

Notice how both you and Hlynka have devolved into bristling and brandishing credentials instead of arguments. «It's afraid».

If your worldview wasn't built on shoddy foundations, you would be able to simply explain them logically instead of attacking my character.

Logically, your posts are arrogant babble demeaning actual research for «failing to explain» illegitimate philosophical categories, e.g. this one, so they call for scrutiny of your character.

My worldview is pragmatic, not built on haughty axioms of a philosopher enamored with his own navel-gazing insights. Its foundation lies in facts, such as the fact that we can understand the computational properties of neuronal networks and see the continuity between human and subhuman neural systems, and that we generally have a very solid idea of why large systems of large neural networks, both in real brains and made of multilayer perceptrons, can support learning of arbitrarily complex skills. It's at the very least more settled than anything Chalmers has written on the nature of consciousness.

If your understanding of the philosophy of science allows you to ignore the consilience of evidence – well, all the worse for you.

I don't believe [specific claim]

This is a positive claim. Just because I can say "I don't believe that God doesn't exist" doesn't just UNO reverse the burden of proof. Mystery is mystery.

Affirmation and negation aren't linguistic properties of phrases, but logical properties of mathematical propositions regarding their specificity or non-specificity vis-à-vis a universe.

LLMs can output your position

I don't see how the fact that a tool can or can't produce my opinion, or any other, has any bearing on its truth value.

You may say that this makes me useless to you or something, but not only is that completely irrelevant, I don't really care?

Notice how both you and Hlynka have devolved into bristling and brandishing credentials instead of arguments. «It's afraid».

Look, attacking someone's character and accusing them of credentialism for defending themselves isn't exactly a novel stratagem.

It's vacuous nonetheless. I'm here to discuss the matter at hand. If I wanted to do bullshit name calling I'd go on twitter.

My worldview is pragmatic, not built on haughty axioms of a philosopher enamored with his own navel-gazing insights.

Lies.

A pragmatist wouldn't speculate, as you do.

You seem to desperately want to equate my position with that of other people who are very certain about the nature of things. Zealots tend to do this, and to think that anyone who doesn't believe what they're saying must believe a different kind of thing just as absolutely.

I don't. I just think you're overly enthusiastic about technological progress and that this blinds you, as it has blinded many others, to the ever-present limitations of engineering and nature.

You're buying the hype, like I've seen countless other people buy various forms of it over the years. And like all of them you will be disappointed.

This is not to say that the technological changes we are living through are not momentous and important. But their prediction is beyond us. And had you more humility you too would recognize it. For that is in fact the essence of pragmatism.


it is, in fact, entirely possible that there is a physical place inside your brain where a specific concept or cluster of concepts is stored

This is fair, and some people have actually compelling memory-based (and radically materialist and/or idealist) theories. I've met my share of neuroscientists that are big into the idea that some abstract set of cortical maps can contain conceptual information. Though their numbers have waned with the years, or so I've been told.

But this is all theoretical, and they, unlike the people that irk me by making assumptions, don't claim that this is solid knowledge.

He's now an AI doomer who just signed Musk's letter calling for a moratorium on further LLM research

That's kind of funny. But maybe I shouldn't have posted this at all, seeing as people seem to think that I support the thesis of the article beyond the mere specific argument made about metaphorical understandings of minds and cognition.