There’s no psychopathology.

I’d like to start with a few disclaimers. This is not an anti-psychiatry post. This is also not the place to ask for or receive advice about your mental health, or what nowadays is called “mental health”.

For some time now I’ve been feeling like I live in a different world than most people I know. It has come to a point where I have to face an awkward alternative: either most people I know are wrong (including learned men and experts) or I am insane. As I don’t believe I have lost my sanity, and as I believe I have very strong arguments to hold my ideas against all reasonable counterarguments, I think it’s about time I sat down and shared my ideas more or less publicly. This is one such idea. What follows is a summary of my academic studies, my professional experience working in the field of mental health, my own personal thoughts, and the influence of several authors, chiefly Georges Canguilhem and Jacques Lacan.

The APA defines psychopathology as “the scientific study of mental disorders, including their theoretical underpinnings, etiology, progression, symptomatology, diagnosis, and treatment”. It is a jurisdiction of medicine, although that does not exclude other disciplines from delving into it as well. It is intrinsically linked to psychiatry, to the point where one cannot exist without the other. But psychiatry itself is a rather contradictory branch of medicine, because while every other specialization of medicine has built its own object of study by exploring a specific organ or function of the body, psychiatry exists only by virtue of what it does not know. In its origins, psychiatry was born to deal with what has classically been called insanity: those people described by Descartes who believed they were made of glass or fancied themselves to be pitchers of water. These outlandish delusions have always caused turmoil in society because nobody really knows where they come from, what they mean and, most importantly, what to do with them. Insane people clearly need help but they do not want it, or the help they are willing to receive is impossible for other people to give. They break the law but they are not criminals, or at least they are benign. They behave like savages but are human beings and deserve to be treated as such.

Now enter the Enlightenment: Lady Reason triumphs all over the Western world; everything now has, or will have, a place and an explanation in the encyclopedia of universal knowledge. And what we understand we control. There are now a bunch of physicians who have little evidence but little doubt that these people are sick and that it is their task to heal them. And try they will, with all their available resources, but with little success. So while neurology developed from the study of the brain, cardiology from that of the heart and so on, psychiatry was born out of sheer embarrassment. It is the branch of medicine that studies mental disorders. However, being a part of modern scientific medicine, it cannot but assert that mental disorders can be explained by studying the body, the contradiction being that the day psychiatry discovers the bodily cause of mental disorders will be the day it ceases to exist as a specialization of medicine, for said cause would fall under the jurisdiction of another specialization: if it’s in the brain then it would be neurology, if it’s in the genes it would be medical genetics, and if we were to discover a new organ in the body then a new specialization would be born to study it, leaving psychiatry in the past.

Therefore, psychiatry exists only because we do not know what mental disorders are. In fact, we don’t even know if the mind is real or not, much less whether it can get sick. What do we actually know, then? We know that (1) there are people who need help, and (2) there are means to help them. So it becomes a matter of administering a scarce resource. This is what psychopathology really is: it is not a science of mental pathology, it is the art of distributing psychiatric drugs and psychological treatments.

There used to be psychopathology. Classic psychiatrists wrote impressive treatises on the subject, with thousands of pages explaining in detail and classifying the behavior of their patients. The mountains really were in labour; alas, only a mouse was born: no progress was made regarding the causes and, most importantly, the treatment of such behaviors. This last problem was drastically improved by the invention of psychopharmacology. Suddenly psychiatrists had a powerful tool to treat the symptoms of insanity, so even though they weren’t any closer to understanding these symptoms, they changed their ideas on the subject to reflect the influence of psychiatric drugs. That influence can be accurately gauged by the changes in the DSM. The first DSMs included theories about the origin and nature of mental disorders; the latest DSMs only list the clinical symptoms necessary to prescribe a treatment. When a patient is diagnosed with depression, the only relevant information that is learned is that said patient will start a treatment for depression.

So are mental disorders real? Of course they are. Whether they are mental or disorders, that’s another question. They are real because they are a set of behaviors that have been observed to occur together: Feelings of sadness, self-harming ideas or behaviors, inability to feel pleasure, these are all things that are real, observable, measurable, and treatable. But are these symptoms a mental problem? Are they a medical problem, or a problem at all? This is highly debatable, and in any case, not a solid foundation for a science.

If a person feels sad all the time, it is only natural for them to think that this life is not worth living. But the opposite is also true: if a person is convinced that there is nothing good in this world, then they will feel sad and hopeless all the time. So what comes first? Should we treat the sadness or the thoughts? And what if the person likes to feel sad, if they don’t want any help? Should we force them? And to make matters worse, it turns out that both psychiatric drugs and psychotherapy are effective*. And that is only counting the treatments that have empirical evidence to back them up and are approved by psychiatry, because, under the right circumstances, literally everything can be therapeutic: there’s horse therapy, art therapy, music therapy, dog therapy, video-game therapy, you name it.

There are some who believe in a deceiving demon: a person, or a group of people, who control our reality and make lies pass for truth, usually with malicious intent. These people believe that the pharmaceutical industry has created mental disorders only to sell drugs, and that psychologists and psychiatrists are their accomplices. For my part, I think it is overly optimistic to believe that anyone has such a degree of control over the situation as to make it bend to their will. I believe that people are just confused, and with good reason, because being human is quite a bizarre experience. There are of course those who profit from the confusion of their fellow man and prey on their ignorance. But even evil has its limits, and nobody can summon such perfect wickedness that no good may come of it. The truth is that for all the confusion that our idea of psychopathology entails, the treatment of and care for people with mental disorders have progressed a great deal in the last decades.

On the other hand there are the encyclopedists, who will argue that the fact that we haven’t discovered the bodily sources of mental disorders does not mean that we won’t succeed in the future. We have certainly made discoveries in this direction: not only do we now know that it is impossible to be sad or mad without a brain, but we also know which specific brain part or substance is required. But even after all the advances in neurology, still no neurological exam is indicated for the diagnosis of mental disorders, and for good reason. Because ultimately, what decides whether someone has a mental disorder or not are arbitrary criteria. Homosexuality is no longer a mental illness only because society has shifted its values towards the acceptance of diverse sexual orientations; were it not for that, we would speak about “the homosexual brain” just as we now speak about “the depressed brain”. We could also speak about “the carpenter’s brain” or “the writer’s brain”, and treat all of those conditions as illnesses.

In conclusion, I believe that contemporary psychopathology is a case of finding a hammer and suddenly realizing we are surrounded by nails. If something can be treated as an illness it will be treated as an illness, because that is l’esprit de l’époque. Classifying something as an illness, assigning it a part of the brain, and prescribing it a drug as treatment makes it real and important, so politicians, scientists, and the general public become aware of its existence and direct resources its way. This is why every day we “discover” that more things are linked to mental health: poor housing, poor nourishment, the weather, sexual orientation, racial discrimination, political ideologies… and as there is no psychopathology, there is no limit to psychic pathologies. There’s a drug or a therapy for everything. It’s no coincidence that we now have the most effective treatments in history and the highest rate of accessibility to mental health services ever, yet the rates of mental disorders are soaring as well. And despite all the advances in psychotherapy and psychopharmacology, no breakthroughs have been made in psychopathology.

I’m convinced that in the future people will look at our ideas on psychopathology as we now look at humorism.

Sources:

APA Definition of Psychopathology: https://dictionary.apa.org/psychopathology

*Psychotherapy just as effective as pharmacotherapy: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5244449/

Respectfully, you are missing the point. I agree with what you say; this is why I started by saying this is not an anti-psychiatry post. What I'm saying doesn't change anything in clinical practice. I never said psychiatry doesn't work, quite the contrary: I explicitly say that it exists because it works. I think you have the right mindset in thinking of it more as an engineering practice; you are actually agreeing with me there.

If you think this is just semantics then fair enough, but I do believe the way we speak shapes the way we understand things. This is what it's about, our concept of psychopathology. You mentioned geriatrics, but geriatrics is an epistemologically sound specialization of medicine. What comes first, the loss of muscle mass or aging? In this case we can clearly establish an object of knowledge even if we can't do anything to stop aging. The same happens with dementia: what comes first, the loss of cognitive function or the proteinopathy? Note that we don't know what causes the proteinopathy, but we do know for sure that Alzheimer's is a brain disease. So I ask again, if depression is a brain disease, then what comes first? Until we can confidently answer that, we can't really say that depression is a "disease like any other". The fact that changing someone's brain is a treatment for depression does not mean anything, because we could also make a carpenter change their profession by messing with their brain, and as I said, both being depressed and being a carpenter change your brain in predictable and observable patterns.

Let's see an example: there's a person who went through a break-up and a year after that they still can't get over it; they feel sad all the time, they don't enjoy anything, and they think they won't be able to live without the person they love. Of course this is a problem, of course this person needs help, and thankfully we have the means to help them; that's wonderful. But are they sick? Is this because of a problem in their brain? We don't know. Unless we do, in which case you'd have to enlighten me. This is a question that nobody is asking because they are busy choosing who gets the zap, as we cannot give psychiatric treatment or psychotherapy to everyone in the world, even though most people could benefit from it at some point in their lives. But I do believe it is a question worth asking, even if it were only out of curiosity for the human condition.

I never implied this is some sort of forbidden knowledge, and this is what baffles me the most. Even though it is evident that there is no such thing as psychopathology, most people act as if there were. I know this doesn't really change anything, but I can't understand it, and this is what moved me to make this post. I also never implied psychiatrists are witch-doctors, because even though psychiatry has epistemological issues it is still a legitimate branch of medicine, for the simple reason that it works. Note that I'm saying that psychopathology does not exist, not that psychiatry does not exist, because it seems that one can exist without the other. I would prescribe you a text by Georges Canguilhem called "What is Psychology?"; it is about psychology, but many of the issues he points out also apply to psychiatry.

Regarding the blog by Dr. Siskind, I fail to see how his argument changes anything. He's a psychiatrist (I think?) talking about a working definition of mental illness that serves his profession. If there's anything specific you think is relevant then please point it out. But the article does talk about evolutionary psychology, and oh boy do I have something to say about that. I will make a post about it soon, but for now let's just say that it tries to explain something we don't understand using something we understand even less. It is another of those things where I feel like everyone's playing a prank on me by treating it as legitimate. The good thing is I can rest assured knowing that I'll have psychiatrists to take care of me.

I'll end with a fun fact: Psychiatrists are called "shrinks" precisely because they were in love with Freud, or as Chesterton and you pronounce it, Fraud. This is of course a translation issue, and the anglicized version of Freud has long been proved to be a flunk.

This is what it's about, our concept of psychopathology. You mentioned geriatrics, but geriatrics is an epistemologically sound specialization of medicine.

I am confused about the standard by which geriatrics achieves legitimacy and psychiatry doesn't.

You seem to think that the mind is a nebulous entity, and thus that if a disease can be attributed to the brain, it's the domain of neurology. All well and good, but the diseases of aging make for an even less clear-cut specialization.

At the very least, consider the causality claimed here:

What comes first, the loss of muscle mass or aging? In this case we can clearly establish an object of knowledge even if we can't do anything to stop aging.

Aging is a multi-causal process: everything from telomere shortening to oxidative stress, to a super-exponential degradation in the surveillance capabilities of the immune system such that after a surprisingly tight threshold everything starts breaking down.

But each of the component aspects is within the domain of an independent speciality. If you find this an acceptable state of affairs, then what's wrong with psychiatry?

All the patients have in common is that they're old. At that point, a domain specialist who works with the diseases predominantly found in the elderly becomes valuable. All the more when the emphasis switches from cure to comfort, as is often the case.

What comes first, the loss of cognitive function or the proteinopathy? Note that we don't know what causes the proteinopathy, but we do know for sure that Alzheimer's is a brain disease. So I ask again, if depression is a brain disease, then what comes first? Until we can confidently answer that, we can't really say that depression is a "disease like any other".

The question remains whether comparing the onset of cognitive dysfunction with that of protein accumulation is even the right question to be asking. Hell, very recently there's been conjecture that it's a prion illness, based on incidents attributed to horizontal spread, and neurosurgeons get Alzheimer's at higher rates, potentially from aerosolized spread. Still a brain disease of course, but by god it is not simple, or the billions we've spent would have solved it.

My central thesis is that it is irrelevant how you class depression, since it's not clear to me what you mean by it being/not being a "disease like any other". Diseases are very heterogeneous, and attempting to do a neat taxonomy is asking for brain worms yourself.

Let's see an example: there's a person who went through a break-up and a year after that they still can't get over it; they feel sad all the time, they don't enjoy anything, and they think they won't be able to live without the person they love. Of course this is a problem, of course this person needs help, and thankfully we have the means to help them; that's wonderful. But are they sick? Is this because of a problem in their brain? We don't know. Unless we do, in which case you'd have to enlighten me.

I once again claim that this is an irrelevant question. What does matter, both to the doctor and the patient, is whether any interventions are warranted.

Can we expect SSRIs to work? Are they worth trying even with the low success rates and effect sizes here, keeping in mind unpleasant side effects? Can we expect that the patient will recover with therapy alone?

These are questions that have relatively clear, if probabilistic answers.

The question of whether or not he's "sick" is just a dangling pointer which, occasionally, can be useful to resolve, especially for the purposes of insurance. To ascribe it much greater significance is pointless.

Imagine Lionel Messi, who would have been a dwarf (or very short) without HGH injections. Was he particularly happy about it? Was he sick? There are no clear cutoffs beyond convenience. In his case, any imaginary unhappiness was solved with a jab and superstardom, but he could well have been a depressed short dude really good at football and bitter about never making it.

At least in depression we make an effort to look for proximate causes: if your mom dies, it is entirely reasonable to be sad for weeks or months. Not years on end. In the latter case, therapy and meds will likely help. Hence the label of depression slapped on you.

Is this because of a problem in their brain? We don't know

I think we do know. Electroconvulsive therapy works on the brain. It produces obvious results on the EEG and the patient's self-reported happiness, as I've linked before.

If it looks like a duck, quacks like a duck, and is amenable to birdshot, you probably have a duck.

As I've previously discussed, it is entirely possible there are contributing factors like the gut-brain axis and systemic inflammation, but the ultimate cause is in the brain; otherwise acting on the brain should not be expected to have such a clear effect, especially when more indirect routes fail.

This is a question that nobody is asking because they are busy choosing who gets the zap, as we cannot give psychiatric treatment or psychotherapy to everyone in the world, even though most people could benefit from it at some point in their lives. But I do believe it is a question worth asking, even if it were only out of curiosity for the human condition.

Once again, I say that pragmatism reigns. Is obesity a disease? That's certainly a debate for the ages. But if Ozempic were shown to be cost-effective and a net return on investment, then forcing insurance to accept obesity as a "disease" and cover it, however arbitrary the label, would be the sensible thing to do. If it saved insurers on future costs, they'd spring for it themselves. Resource constraints are independent of whether or not it meets your standards of illness. I agree there isn't enough money to give everyone a shrink on call, but whether or not we should aim for that is a matter of pragmatic tradeoffs and not semantics.

I never implied this is some sort of forbidden knowledge, and this is what baffles me the most. Even though it is evident that there is no such thing as psychopathology, most people act as if there were. I know this doesn't really change anything, but I can't understand it, and this is what moved me to make this post. I also never implied psychiatrists are witch-doctors, because even though psychiatry has epistemological issues it is still a legitimate branch of medicine, for the simple reason that it works. Note that I'm saying that psychopathology does not exist, not that psychiatry does not exist, because it seems that one can exist without the other. I would prescribe you a text by Georges Canguilhem called "What is Psychology?"; it is about psychology, but many of the issues he points out also apply to psychiatry.

I apologise if I came across as implying you held those views; those were more overarching observations than specific critique.

I think psychopathology is a useful concept, even if it's a placeholder. To come back to LLMs, as is my habit, there are behaviors that are helpfully described with short labels, such as "mode collapse" and "hallucinations", even if we don't know precisely why they happen at the algorithmic level, and even though the system is transparent to us. We can use these semantic labels; they are helpful in describing a phenomenon. The human brain is nowhere near as debuggable, so given its complexity it's no surprise that the peccadillos of the software layer might be usefully tackled while (tentatively) ignoring the hardware, i.e. therapy. To further pump the intuition here, it is not necessarily the case that a malfunction on the software layer is attributable to an error in the hardware, and a computer, be it silicon or meat, can well be functioning fine at the bottom but break above. Now, even this is physical. A buggy piece of software on an SSD is still physical, in that if you had enormous patience and an electron microscope, you could look at the system and find the faulty distribution of electrons in a NAND chip and fix it that way. That is self-evidently a terrible way to debug software, even if it is doable.

Similarly, while "mental" illnesses are obviously an abstraction on top of whatever the hell is going on in the brain, and then down to the quantum foam, being an abstraction is not a bad thing, especially when, as I've shown, even understanding the neurological underpinnings of some diseases hasn't helped so far.

We gave people ECT well before we discovered the EEG changes, because their psychopathological symptoms improved and that was good enough.

Regarding the blog by Dr. Siskind, I fail to see how his argument changes anything. He's a psychiatrist (I think?) talking about a working definition of mental illness that serves his profession. If there's anything specific you think is relevant then please point it out. But the article does talk about evolutionary psychology, and oh boy do I have something to say about that. I will make a post about it soon, but for now let's just say that it tries to explain something we don't understand using something we understand even less. It is another of those things where I feel like everyone's playing a prank on me by treating it as legitimate. The good thing is I can rest assured knowing that I'll have psychiatrists to take care of me.

I am aghast that someone has washed up on these shores and not heard of Scott. I suppose weirder things have happened haha.

I would strongly advise you to read the entire article. It is enlightening, and if you have yet to enjoy the glory of his work, well, I envy you for it.

In case it isn't obvious, I agree with plenty of the things you've said. The thing is, there are aspects where I do disagree, and in several cases, it's from a pragmatic viewpoint.

It seems clear to me that depression is primarily a disease of the brain, even if other things can contribute.

Both neurologists and psychiatrists grapple with dysfunctions of the brain, but at different levels of abstraction. That's the difference between an electrical engineer and a programmer. Subsuming psychiatry, or declaring psychopathology useless, achieves nothing, because all it does is offload the work to another branch, which will, because the incentives are so obvious, immediately create neurologists-who-are-psychiatrists-in-all-but-name. Similarly, a sufficiently talented (and insane) EE could create a large program, nay, any program, by fiddling with physical bits, but there exist clear reasons not to do that, and benefits from having them be different specialities, even if they work on the same "hardware".

Further, arguing about semantics as to whether depression is a "disease" or a "mental" illness achieves nothing.

Is it a useful signifier that constrains expectations? In other words, if I tell you I am depressed, does that provide useful information? Yes. (For the record, I am, and the first- and second-line psychiatric interventions have failed me; I am seriously considering ECT.)

Does it give me more information about whether or not the tools at my disposal can help a patient? Yes. If you meet the requirements on one of the many depression indices, or the DSM criteria, then "medicine for depression" will likely help.

There are no more questions left to answer that matter. Perhaps if there's political valence, and people wish to wrangle insurance companies or governments to provide therapies of their choice, such as gender-affirming care (let's not go into that can of worms, I beg you). But even then that's just a matter of how the dice fall and practicality. The wine sparkles regardless of whether you label it champagne.

I don't understand why you keep trying to defend psychiatry when nobody's attacking it. I guess you wrote your answer as you read the text, because you literally quoted my saying that psychiatry is legitimate. Psychiatry may be quirky and not as epistemologically sound as other branches of medicine, but it is still medicine, still scientific, and still helpful. I would say, though, that psychopathology is a placeholder. I understand that you have a pragmatic point of view, and you really should. This doesn't really change anything for psychiatrists, and they don't need to worry about this, so I thank you for taking the time to engage in this discussion. But should we really be satisfied with having a placeholder instead of a psychopathology? I don't know if you agree that your belief that depression is a brain disease is just that, a belief, but I believe that's the case because the evidence I've seen is not conclusive. Yes, it's impossible to be depressed without having a brain, and depression is something that exists and changes your brain, but this is where it's important to think about what comes first. We can agree that treating people with depression should be our first and foremost concern regardless of existential questions, but why stop at that? Why not try to understand what's going on? This shouldn't change the attention and the care we give to depressed people, but it can help us think more accurately about the problem and, who knows, maybe even come up with more effective solutions in the future.

Furthermore, I think that believing that psychopathology is something other than a placeholder opens the door to all sorts of abuses. Psychiatrists are scientists and know the limitations of their discipline (or at least they should), but psychopathology is being used in all sorts of contexts where it has no business whatsoever, and this is in part because it is an epistemologically bankrupt concept. Just as a currency undergoes inflation and loses value, concepts that are not rigorous enough are more likely to be overused. Psychologists are the worst offenders here, or maybe it's just my perception because I'm a psychologist myself and of course I know many people in my profession, so I'm surrounded by people who very loudly make all sorts of claims. The best psychologists I know are aware that psychopathology is a placeholder, but most are not, or they ignore that fact. And as psychologists feel insecure about the scientific status of their discipline, they over-rely on psychopathology to sell their services.

There is something very funny about the history of psychology, because as you must know, computers were made with the specific objective of imitating human thought. But then in the 70s a bunch of psychologists saw computers, were astonished at how much they resembled human thought, and came to the conclusion that the human mind works like a computer. I'm personally against the expression "Artificial Intelligence" because computers are neither intelligent nor dumb. They do what they are programmed to do. An animal, for instance, can be intelligent or dumb because it is directly involved in the outcome of its decisions, and it can be wrong or right. Computers are never wrong, therefore they lack the ability to be involved in their decisions. So even if LLMs resemble human speech, we would be wrong to believe that speaking to an LLM is the same as speaking to a person. In that sense, just the fact that we can treat depression as a brain disease does not mean that it is a brain disease. This is only technically correct because it ignores the problem by papering over it.

We can agree that treating people with depression should be our first and foremost concern regardless of existential questions, but why stop at that? Why not try to understand what's going on? This shouldn't change the attention and the care we give to depressed people, but it can help us think more accurately about the problem and, who knows, maybe even come up with more effective solutions in the future.

Please don't interpret my fervent appeals for pragmatism as a lack of curiosity. If I was genuinely uninterested in such matters, I wouldn't even know the little tidbits of information I've sprinkled in! They're not in medical textbooks or my exam curriculum for sure.

Knowing more would be great. But do not expect that to necessarily mean that psychiatric treatment has firmer grounding. A lot of shit works and doesn't work and we don't know why. As with autism, even when we know why, we can't fix it (without more advanced gene therapy).

Psychiatrists are scientists and know the limitations of their discipline (or at least they should), but psychopathology is being used in all sorts of contexts where it has no business whatsoever, and this is in part because it is an epistemologically bankrupt concept

Psychiatrists aren't scientists! Doctors, as a matter of course, are not scientists! Some of us do research and clinical studies. That isn't our core responsibility, and most doctors you see have no papers to their name.

We are engineers. We try and fix things, and if a tool works, it works. That does not stop us from seeking better tools.

I looked at the Wikipedia article again on psychopathology, and as far as I can tell, it is an entirely benign subject, and I am fundamentally confused by accusations that it is an epistemically bankrupt concept.

Psychopathology is the study of abnormal cognition, behaviour, and experiences which differs according to social norms and rests upon a number of constructs that are deemed to be the social norm at any particular era.

Biological psychopathology is the study of the biological etiology of abnormal cognitions, behaviour and experiences. Child psychopathology is a specialisation applied to children and adolescents. Animal psychopathology is a specialisation applied to non-human animals. This concept is linked to the philosophical ideas first outlined by Galton (1869) and is linked to the appliance of eugenical ideations around what constitutes the human.

Later:

Psychopathology can be broadly separated into descriptive and explanatory. Descriptive psychopathology involves categorising, defining and understanding symptoms as reported by people and observed through their behaviour which are then assessed according to a social norm. Explanatory psychopathology looks to find explanations for certain kinds of symptoms according to theoretical models such as psychodynamics, cognitive behavioural therapy or through understanding how they have been constructed by drawing upon Constructivist Grounded Theory (Charmaz, 2016) or Interpretative Phenomenological Analysis (Smith, Flowers & Larkin, 2013).[7]

CBT is slightly better than the alternatives. I am exceedingly dubious that what it claims as the underlying mechanisms is what's actually going on, but it still works, and beats placebo and (barely but significantly) the alternatives.

I have done a deep dive on the topic myself, but I'd have to dig very deep into my profile to find it.

But even then, the existence of flawed models (which still do useful things) is no more a scathing critique than someone claiming that the Standard Model being unable to explain the overwhelming majority of the matter or energy in the universe makes Physics as a whole illegitimate. We know it's flawed. It's still useful.

There is something very funny about the history of psychology, because as you must know, computers were made with the specific objective of imitating human thought. But then in the 70s a bunch of psychologists saw computers, were astonished at how much they resembled human thought, and came to the conclusion that the human mind works like a computer. I'm personally against the expression "Artificial Intelligence" because computers are neither intelligent nor dumb. They do what they are programmed to do. An animal, for instance, can be intelligent or dumb because it is directly involved in the outcome of its decisions, and it can be wrong or right. Computers are never wrong, therefore they lack the ability to be involved in their decisions. So even if LLMs resemble human speech, we would be wrong to believe that speaking to an LLM is the same as speaking to a person. In that sense, just the fact that we can treat depression as a brain disease does not mean that it is a brain disease. This is only technically correct because it ignores the problem by papering over it.

The human brain is a computer. It just happens not to adhere to the von Neumann architecture as most electronic ones do, but it is possible to simulate a single biological neuron with ~1000 artificial neurons in the ML sense.
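(If you want the flavour of that claim in miniature, here is a toy sketch of my own, not the published result the "~1000 artificial neurons" figure comes from: simulate a simple leaky integrate-and-fire neuron model, then train a small artificial network to predict whether it fires for a given input. Every parameter and the network size here are illustrative assumptions only.)

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)

    def lif_fires(current, dt=1.0, tau=20.0, v_rest=-65.0, v_th=-50.0, r=10.0):
        # True if a toy leaky integrate-and-fire neuron crosses threshold for this input trace.
        v = v_rest
        for i_t in current:
            v += dt * (-(v - v_rest) + r * i_t) / tau
            if v >= v_th:
                return True
        return False

    # Random input-current traces and the model neuron's spike/no-spike response.
    X = rng.uniform(0.0, 3.0, size=(2000, 100))
    y = np.array([lif_fires(trace) for trace in X])

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    net = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
    net.fit(X_tr, y_tr)
    print("held-out accuracy predicting the model neuron:", net.score(X_te, y_te))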

Further, the human brain is bound by physics. Evidence otherwise is sorely lacking. We can simulate physics very well, barring QM on macroscopic structures at typical temperatures, and even that is only because it is a computationally difficult thing to do, not because it is fundamentally impossible to model.

Humans do what we were programmed to do. We just had a Blind Idiot God as a programmer, who had to bootstrap a VERY complex computer from a surprisingly small amount of code (DNA and epigenetics).

"Evolution, please grant me intelligence."

"To get more bitches and gather more berries?"

"Yes"

Invents condoms and ozempic like a boss 😎

You mistake the difficulty in unpacking the black box of human cognition for proof that it can't be unpacked. That is a grave error indeed.

Well, I'm at a bit of a loss here. What do you think engineering is, if not the application of natural sciences? It's not a fairy godmother that engineers things. Claude Bernard would vivisect you for saying that doctors are not scientists, and then Kraepelin and Jaspers would electroshock some sense into your computer for saying psychiatrists are not scientists. Psychopathology could be a benign illusion, but the fact that believing in something that does not exist doesn't hurt anyone is no argument for holding that belief, especially when we can be just as effective without it as we are now.

As for the rest, last time I checked the preferred psychotherapies were third-wave behavioral therapies like Behavioral Activation and Acceptance and Commitment Therapy. Even if in practice they apply cognitive techniques, they are ultimately followers of Skinner and therefore assert that neither the mind nor cognitive mechanisms exist. They also maintain that depression is not a brain problem but a behavioral one, and many (like Marino Perez Alvarez) go as far as to question the relevance of psychopathology. I don't particularly adhere to this school of thought, but I do agree with their conclusion regarding psychopathology.

Lastly, I don't assert the impossibility of unpacking the black box of human cognition. What I said is that we are yet to find definitive proof that it is possible, so it's not time to make claims about what humans are programmed to do or not to do quite yet. I would also say, paraphrasing a scholium by Nicolás Gómez Dávila, that if the universe were so artless as to be comprehensible to the human brain, then it would be immeasurably and unbearably boring, and we would have legitimate reason to feel disappointed. It's hilarious that every time humankind creates some wacky artifice it believes it holds the key to understanding the universe. It happened with fire, writing, mechanical watches, and now computers, and so shall it be per saecula saeculorum. I guess there are computers everywhere for those with the eyes to see.

You mistake the difficulty in unpacking the black box of human cognition for evidence that it can't be unpacked. That is a grave error indeed.

Can you at least agree to the following:

  • that "unpacking the black box of human cognition" would involve the practical ability to have granular, read/write access to an actual human mind.

  • That no read/write access to a human mind has ever been demonstrated, nor has any meaningful progress toward such a capability ever been demonstrated.

  • That many people have previously claimed to be capable of demonstrating such access, or else of generating the capability to demonstrate such access; that their claims have been taken seriously, been tested rigorously, and have uniformly failed those tests.

  • That current iterations of the claim, such as yours here, no longer make straightforwardly testable predictions of the sort that were common from prominent scientists and "scientists" over the last century.

  • That the actual engineering we do with humans in fields like teaching, law and order, political organization and so on, all operate as though the self is not bound by physics in the way you believe it must be. That is to say, when a machine does something wrong, we go for the person who programmed it, but when a person does something wrong, we punish them directly. When we try to shape humans, we do so with techniques working from the assumption that the individual is autonomous and possessed of their own free will in all practical senses of the term.

I'm going to try to restate what I see as your position, before responding to it:

With regard to "read/write access", it appears that you don't mean it in the basic sense of "Do things that inform you of the content"/"Do things that change the content", but rather you specifically mean "outside of the normal IO channels". This is because free will is the big thing here.

Because I have free will, nothing you can do through my normal IO can control me. You can present evidence, and I'm free to veto the idea that it's even evidence. You can listen to what I choose to say -- or choose to think at your implant -- but you can't keep me from lying and you can't detect when I am. This fundamentally changes things because it means you cannot neglect my will; I am in control of how things pass into/out of my mind, and until you can go around my normal IO channels you need my buy-in, unlike with ships and planes, which don't get a say in things. As a result, the normal paradigm of engineering ain't gonna work.

For "read access" to change things here, you would have to be able to not just read my surface level outputs but also the deep generating beliefs with reasonable resolution -- at least to the degree that "lie detection" can be done reliably. For "write access" to change things you would have to be able to write my conclusions not just impressions.

And reliable lie detection doesn't exist. It's impossible to "hack" into someone's mind in a way that bypasses the individual's say on things, and do things like "making a Christian into an atheist" or "implanting a memory". Been tried, failed.

Is this essentially correct, or am I missing a key distinction here?

Because it looks to me like you're noticing that there's almost always a little white in a grayscale world and that attempts to do "pure black" aren't super successful, and then making the mistake of declaring everything to be "white" because it's "not [completely] black".

There's a lot of gray area out there, and some of it quite dark.

Is this essentially correct, or am I missing a key distinction here?

You nailed it. And specifically this part here:

This fundamentally changes things because it means you cannot neglect my will; I am in control of how things pass into/out of my mind, and until you can go around my normal IO channels you need my buy-in, unlike with ships and planes, which don't get a say in things. As a result, the normal paradigm of engineering ain't gonna work.

...And further, that this view is supported by an overwhelming amount of evidence from every facet of human behavior, and every claim to the contrary is either unfalsifiable or has been falsified, yet people continue to insist otherwise, in a way identical to Sagan's invisible dragon. This isn't because they're stupid, it's because Sagan's invisible dragon is describing something irreducible about how humans reason. Reasoning is not simply doing math on accumulated evidence. The evidence is weighed and assessed in reference to axioms, and those axioms are chosen. You can choose to uncritically accept one provided to you by others, or you can choose to look at an arbitrary amount of arbitrarily-selected evidence until you arbitrarily decide that no more evidence is needed and a conclusion can be drawn, or you can take certain positions as self-evident and then prioritize the evidence that is compatible with them.

That last option is how people end up believing in Determinism, despite zero direct evidence in favor of determinism and a lot of evidence against it: they've adopted Materialism as an axiom, and Materialism requires Determinism. Any evidence against determinism is likewise evidence against Materialism, but because Materialism is an axiom, evidence against it is simply deprioritized and discarded. This is not objectionable in any way, and it is the only method of reason available to us. The problem comes from people ignoring the actual operation and substituting for it some fantasy about reason as deterministic fact-math, as though their choices were not choices but predetermined outcomes, and anyone who doesn't choose the same axioms is simply not reasoning properly.

There's a lot of gray area out there, and some of it quite dark.

I'd be interested in the grey you see. Torture regimes observably fail. Totalitarianism observably fails. Power slips through the fingers, despite all efforts to the contrary. People have been trying to reduce humanity to an engineering discipline for three hundred years running, and they've failed every time. Again, that's not conclusive proof that they'll continue to fail indefinitely, but looking at the historical record, and accounting for my understanding of technology that actually exists, I like my odds.

Saying "torture regimes fail" is like saying "cars fail". Of course they do; entropy is a bitch. But cars also work for a while before breaking down. It's neither the case that "Torture regimes never fail" nor that "Torture never accomplishes anything for the torturer". It's a question of "to what extent", and "in which circumstances?".

The difficulty of "engineering people" doesn't require determinism to be false, just that we have imperfect knowledge of what the determinants are. You'll have a hard time getting into my safe, despite the combination lock being entirely deterministic. If you were to have a sufficiently good model of the internals, you'd know just what to do in order to get the desired response 100% of the time. If you have a partial model, you only get partial results. It's just a matter of entropy.

Similarly, one's ability to persuade a person depends strongly on one's ability to predict what kinds of things that person would veto as "not evidence" and what they would accept. Even if we assume human beings are 100% entirely deterministic, in order to get 100% results we need to have a complete model of the deterministic algorithm, which changes by the moment as new experiences accumulate. We don't have to posit that a human mind is fundamentally non-deterministic in order to recognize that perfect determination is going to be an infeasible practical problem -- hence the "humans need to be treated like people" abstraction.

But what if we don't care about perfect 100% results? What if we don't limit ourselves to zero chance of failure, zero limit to the reach of control, zero limit to the duration of control?

Things get a lot more feasible. Now we don't have to contain a 100% faithful and ever changing model of the person we're attempting to "control" -- or perhaps more fittingly "manipulate". We just need to create a situation where we can reduce the entropy enough that we can get the results we're looking for before the entropy compounds and bursts through the seams.

And sure enough, manipulation works. Not well enough to get you a stable and fulfilling marriage into old age, but people do get manipulated successfully enough that it harms them and benefits their manipulators -- in the short term, at least. Serial killer Ed Kemper used to look at his watch and mutter something about not knowing if he had time to pick up a hitchhiker, as what PUAs would call a "false time constraint". Because the interaction of "picking up a hitchhiker" is such a simple, low-entropy scenario, it didn't matter that he couldn't fully predict everything, because all he needed to do was find that one little regularity that allowed him to "social engineer" some victims into his car.

A much more extreme version of this "funnel people into low entropy and take advantage of superior knowledge of the terrain" is hypnosis. Provided that the "subject" agrees to hypnosis and isn't creeped out and on guard, hypnotists can take advantage of a fairly low-entropy set of possible responses to engineer ways to get people into states where their guards are predictably lowered even further, and then do stuff that bypasses the person's conscious will completely. Implanting fake memories is easy, and doesn't even require hypnosis. Implanting other ideas is doable too, as is prying out secrets that the person really does not want shared, and removing the person's ability to speak/move/remember basic things. The stuff that's possible with hypnosis is legit scary.

When you ask rhetorically "Can you make a Christian atheist?", my answer is "Provided they volunteer for hypnosis, yes, actually". I have run that exact experiment, and I forget my exact success rate but it was something like six attempts and five successes. The effect lasted about one to three months depending on the person, then they ended up reverting back to believing in God.

So is that "success" or "failure"? You could look at the bright side and note that it didn't last forever, or you could look at the dark side and notice that it worked remarkably reliably, for months without a shred of reinforcement, and with a very unsophisticated strategy and zero attempt made to make the effect robust.

It just comes down to what you're trying to justify. "Attempts to write the bottom line first and then engineer a way to manipulate people into doing what you want are unwise and ineffective in the long term and large scale", absolutely. "I know I saw that picture, because I remember it, and it's impossible to implant memories against a person's will", no.

As it applies to this conversation, the relevant question seems to be "Can 'engineering' mindsets be used effectively to do things like help people with psychiatric conditions?" I'd say "Yes, absolutely" -- but I'd also challenge your presupposition that "engineering" requires one to work around rather than with people's will. People's will can be predictable and controllable too, to an extent. Incentives shape wills, because people aren't dumb. If you show me a better way to get to work, I'll take it because it gets me what I want. Free will, sure. But also deterministic -- and determined by what gets me what I want. If you plug your fence into the electrical outlet, I won't touch it twice. Call it "operant conditioning"/"reprogramming when a person did something wrong", or call it "voluntarily deciding not to get shocked again". To-may-to, to-mah-to.

It's neither the case that "Torture regimes never fail" nor that "Torture never accomplishes anything for the torturer". It's a question of "to what extent", and "in which circumstances?".

I am pretty confident that people can't do much better with a torture regime than we've seen them do in the past. That is to say, I think the problem is pretty well bounded by irreducible limits on human agency and capacity, and I do not expect this to change in the foreseeable future. Notably, if Determinism could be proven, if we really could engineer practical mind control and mind-reading, this would no longer be the case, and much worse torture regimes would seem a very likely outcome.

The difficulty of "engineering people" doesn't require determinism to be false, just that we have imperfect knowledge of what the determinants are.

Suppose I claim to be able to predict the outcome of coin flips. You have me call a hundred coin flips. If I get 90 right, it's reasonable to say I'm on to something, even if I don't have all the kinks ironed out. If I get 56 right, the reasonable conclusion is that I got lucky. If Determinism could get 90 out of a hundred, or 75, or even 60, I think that would be reasonable evidence that it was correct. My read of the historical evidence is that the outcomes of attempts to engineer from Determinism have no correlation with the goals of the engineering.

I'm not asking for 100% results. I'm asking for any results that are clearly distinguishable from non-Determinist explanations.
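(The coin-flip arithmetic is easy to make concrete; this little Python sketch is my own illustration, nothing more: under a fair-coin null hypothesis, calling 56 out of 100 flips is comfortably within luck, while 75 or 90 out of 100 is not.)

    from math import comb

    def p_at_least(k, n, p=0.5):
        # One-sided chance of calling at least k of n fair coin flips by luck alone.
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    for correct in (90, 75, 60, 56):
        print(f"{correct}/100 correct: p = {p_at_least(correct, 100):.2g}")
    # 56/100 comes out around p = 0.14 (pure luck territory), 60/100 around 0.03,
    # while 75/100 and 90/100 are vanishingly unlikely by chance.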

We don't have to posit that a human mind is fundamentally non-deterministic in order to recognize that perfect determination is going to be an infeasible practical problem -- hence the "humans need to be treated like people" abstraction.

It is important, I think, to recognize that this is Determinism Of The Gaps. Previous iterations of Determinism did not believe that perfect determination was practically infeasible, were pretty clear that humans did not need to be treated like humans, and in fact believed that they had all the tools at hand to arbitrarily shape humanity however they wished. Their beliefs were high-status, received very significant social, political, and financial backing, and still failed utterly by their own stated standards. Hubris is a human constant, but it does not appear to me that most Determinists recognize the previous falsifications and the subsequent general retreat into unfalsifiability, which I think is a serious red flag for the theory in general from an empirical perspective, and also a telling error in one's understanding of history, of where we are and how we got here.

Things get a lot more feasible. Now we don't have to contain a 100% faithful and ever changing model of the person we're attempting to "control" -- or perhaps more fittingly "manipulate". We just need to create a situation where we can reduce the entropy enough that we can get the results we're looking for before the entropy compounds and bursts through the seams.

Manipulation and deceit aren't novel, though, and no one is confused over whether they exist. And in fact, we generally expect people to resist and avoid such attacks, and consider them at least partially responsible if they fail to do so.

Provided that the "subject" agrees to hypnosis and isn't creeped out and on guard, hypnotists can take advantage of a fairly low-entropy set of possible responses to engineer ways to get people into states where their guards are predictably lowered even further, and then do stuff that bypasses the person's conscious will completely.

I don't know much about hypnosis, so this is both interesting and directly applicable to the issue at hand. My rough understanding is that hypnosis is easily resisted, and that you can't get the subject to do anything they actually don't want to do. Is this incorrect?

The stuff that's possible with hypnosis is legit scary.

I'm prepared to believe it. Where's the proof? Offhand, I can think of several obvious real-world applications for a workable method to alter someone's mind in a controllable fashion:

  • Treating addiction seems stupidly obvious. Does hypnosis reliably nullify addictions to alcohol, tobacco or narcotics? Does it improve weight-loss outcomes?
  • Is hypnosis a reliable tool for criminal interrogation? How about for depositions and so forth in civil lawsuits? In a lawsuit with conflicting claims, why not simply require the parties to undergo hypnosis so that any inconvenient facts they're hiding can be teased out?
  • Marriage counseling seems like an obvious use-case. When you have people who want to get along but are having conflict, why not just smooth all that out with a little touch-up? I'd imagine people would volunteer for this happily if it could be demonstrated to work. This would be an example where you would even expect the subjects to be enthusiastically cooperative.
  • Any sort of trusted position, from judge to police officer to accountant to banker to CEO, seems like it would be a good candidate for either will-compromising verification of good conduct, or for induced commitment to good conduct.
  • Education: improve study habits? Suppress disruptive behavior? Get kids to get along with each other?

...The short version is that if the obvious implications of what you're saying were true, I'd expect the world to look very different from how it does. For a start, I'd expect hypnotists to be as highly-paid and multitudinous as tech workers. They don't seem to be, though. Why? Hypnosis has been studied and practiced for at least a century, likely much longer. Where's the hard takeoff in society-restructuring capability?

When you ask rhetorically "Can you make a Christian atheist?", my answer is "Provided they volunteer for hypnosis, yes, actually".

Most interesting. Could you describe this process in more detail? Why does it wear off? What do you think the wear-off implies? Did they know you were going to try to do it?

...but I'd also challenge your presupposition that "engineering" requires one to work around rather than with people's will.

Again, how do we distinguish "cooperative engineering" from just regular willful "cooperation"? People can choose to submit, to follow orders, to obey, if they want to. The Determinist argument was that you could force them to obey, and even force them to want to.

If you show me a better way to get to work, I'll take it because it gets me what I want.

Will you? Why do you suppose teaching in an inner-city school sucks so hard? Aren't the teachers trying to offer the students better ways to work?

But also deterministic -- and determined by what gets me what I want. If you plug your fence into the electrical outlet, I won't touch it twice.

You might touch it twice to prove how tough you are to your friends. You might sue me for not posting proper signage, or go off in deep contemplation about how things aren't as they appear. You might fly into a rage or burst into tears. You might go and buy insulated wirecutters and cut the fence to bits. You might piss on it to see what happens. You might get angry and cuss me out. You might burn my house down.

You probably won't touch it twice. People do indeed respond to incentives. They don't respond predictably, or controllably.

I have not claimed that people can't modify other people's behavior. My argument is that such modification of others is an art, and very much not a science. It is not predictable, controllable or repeatable in any but the very loosest senses of these words, and it does not generalize across all humans well at all. My evidence is, again, any facet of human interaction you'd care to look at. Education, law enforcement, romantic relationships, interpersonal conflict, employer/employee relationships, politics, any form of human organization... all of these would operate in a vastly different way if modification of others were a science. They don't, which is very good evidence that it isn't.

Further, I do not think that this evident state of affairs is going to change within the foreseeable future.

More comments
  1. To completely unpack it? Yes. The ideal would be to read individual molecules down to the limits of the Uncertainty Principle. Luckily even noisy signals like external EEGs provide useful data.

  2. https://youtube.com/watch?v=vpzXI1hlujw is a conclusive rebuttal. BCIs have been a thing for decades. We can reproduce imagery from dreams and even capture the mind's eye with surprising clarity, with non-invasive techniques to boot. I can write to your brain right now, just give me a scalpel, a needle, an electrode and medical indemnity documents.

  3. No. Or at least I don't know of any such people, I have no affiliation with them, and their failures do not impede progress in the field.

  4. No. If we emulate a human being, via brain scan or some high bandwidth side channel, and it works, then voila. That strikes me as separated from today by about the same gulf of difficulty and time as a Victorian or early 20th century scientist theorizing about space flight was from actual orbit. Or at least it would be, if not for AGI being imminent, which will likely solve the problem even if we didn't build it as a replica of the human brain (though inspiration was taken).

  5. What of it? Many human institutions are built on faulty foundations. "All Men are created equal"? They can pull the other one, even if it's a useful legal and social fiction. Things do not need to be true to do useful things: a monarchy backed by divine will has fuck-all going for it, yet it still manages to raise armies, collect taxes and build roads. You can scare a toddler with ghost stories and stop them from wandering out to be eaten by a coyote.

Luckily even noisy signals like external EEGs provide useful data.

...People have claimed to have developed reactionless thrusters before. A test I've heard proposed is to hang the thruster and its power source from a pendulum, inside a sealed plastic bag, and then show it holding the pendulum off-center. In a similar vein, here's some proposals for similar tests of read/write access to the human mind: a working lie detector, love potion, mind-reader, or mind-control device would all be obvious demonstrations of the basic capability. Do you believe any of these exist, or that they will exist in, say, the next few years? If not, why not?

https://youtube.com/watch?v=vpzXI1hlujw is a conclusive rebuttal.

It certainly doesn't seem to be. I'm all for it, but this is reading output, not even input/output, and certainly not read/write.

We can reproduce imagery from dreams and even capture the mind's eye with surprising clarity, with non-invasive techniques to boot.

Could I get a cite on this? I would like to see some actual captures from dreams or the mind's eye, because I'm pretty sure such things don't exist in the sense I understand the terms. I'm interested in being proved wrong, though.

I can write to your brain right now, just give me a scalpel, a needle, an electrode and medical indemnity documents.

You can damage my brain right now, or possibly jam it. You can't write on it in any meaningful fashion, as far as I know. Again, if you could, that would necessarily imply the present existence of mind reading and mind control, correct?

No. Or at least I don't know of any such people, I have no affiliation with them, and their failures do not impede progress in the field.

Marx? Freud? BF Skinner? Watson? As for affiliation with them, do you think the "God of the Gaps" is a reasonable criticism of Christianity? If so, is it just Christians who shouldn't collectively retreat to unfalsifiability, or are such collective retreats in the face of contrary evidence generally bad whoever is doing them?

No. If we emulate a human being, via brain scan or some high bandwidth side channel, and it works, then voila.

If. As in, in the future. As in, not in the present. You recognize that we not only can't emulate a human being, but we aren't anywhere close to being able to, right? That the capability you are reasoning from is, at the moment, entirely fictional?

What of it?

All direct evidence available to us shows that the human self has free will: we experience its operation every minute of every day, and can examine that operation in intimate detail. All engineering we do on humans operates on the assumption that free will exists, from education to law to navigating interpersonal relationships. Determinism makes no predictions that can currently be tested. Determinism's previous testable predictions have all been falsified. No engineering we currently do on humans operates according to deterministic principles. All these are facts, to the extent that "facts" can be said to exist.

The fact that all of the above can be so easily ignored can teach one some important things about the operation of human reason, and particularly the prime role that free will takes in that operation. You can't be made to draw a conclusion by presented evidence, because the individual will has a veto over what is even considered evidence. All beliefs are chosen by distinct acts of the will.

People have claimed to have developed reactionless thrusters before. A test I've heard proposed is to hang the thruster and its power source from a pendulum, inside a sealed plastic bag, and then show it holding the pendulum off-center. In a similar vein, here's some proposals for similar tests of read/write access to the human mind: a working lie detector, love potion, mind-reader, or mind-control device would all be obvious demonstrations of the basic capability. Do you believe any of these exist, or that they will exist in, say, the next few years? If not, why not?

A tailor jumping off the Eiffel tower with a large apparatus of mostly linen did not prevent the invention of working parachutes. The dozens of people jumping off cliffs to their deaths throughout the ages did not prevent heavier than air flight.

There are reasonably robust theoretical reasons to suppose that reactionless thrusters do not work. I wish to see the equivalent for BCIs.

It certainly doesn't seem to be. I'm all for it, but this is reading output, not even input/output, and certainly not read/write.

https://en.wikipedia.org/wiki/Retinal_implant

Foerster was the first to discover that electrical stimulation of the occipital cortex could be used to create visual percepts, phosphenes.[1] The first application of an implantable stimulator for vision restoration was developed by Drs. Brindley and Lewin in 1968.[2] This experiment demonstrated the viability of creating visual percepts using direct electrical stimulation, and it motivated the development of several other implantable devices for stimulation of the visual pathway, including retinal implants

(The eyes are also an extension of the brain from an anatomical and developmental standpoint, but you can write imagery to the brain directly, as the excerpt describing the effects of stimulating the occipital cortex shows.)

We can reliably produce different sensations and even cause muscular movements. The tech isn't at the point I can make you see HD pictures.

You can damage my brain right now, or possibly jam it. You can't write on it in any meaningful fashion, as far as I know. Again, if you could, that would necessarily imply the present existence of mind reading and mind control, correct?

Now, I can always adjust the voltage on the electrode; the indemnity documents are purely an insurance matter, not because I'm not licensed for neurosurgery. I can reliably induce plenty of somatic sensations in you and show you some really sweet colors, and once I get to the temporal lobe, I can promise meeting God Himself or your money back (and you don't even have to die; temporal lobe epilepsy or stimulation causes religious ecstasy).

We can read minds. See video linked. It literally picks up on his will to move his paralyzed arms and converts that to equivalent mouse movements. We can control minds. Just very crudely. The dude in the video says he intends to cosplay as Professor X, which is hilarious, and also not far from reality. If he can move a mouse cursor, he can move a robot, with his mind, at a distance. This has been done with Neuralink in monkeys, and with other BCIs, also in monkeys.

Mind reading and mind control exist. It's not a psychic phenomenon, it uses Bluetooth.
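To be concrete about what "picks up on his will to move" means mechanically: cursor decoding is, at its core, a regression from recorded firing rates to intended movement. Here's a minimal sketch of that idea with synthetic data; the channel count and noise levels are made-up placeholders, and real systems layer calibration, Kalman filtering and closed-loop correction on top of essentially this.

```python
# Minimal sketch of a linear BCI decoder: map neural firing rates to
# intended 2-D cursor velocity. All data here is synthetic; no real BCI
# hardware or API is assumed.
import numpy as np

rng = np.random.default_rng(0)

n_channels = 96          # placeholder electrode count
n_samples = 5000         # calibration samples

# Ground-truth "tuning": each channel's rate depends linearly on intended velocity.
true_tuning = rng.normal(size=(n_channels, 2))

intended_velocity = rng.normal(size=(n_samples, 2))              # what the user wants
firing_rates = intended_velocity @ true_tuning.T                  # clean neural response
firing_rates += rng.normal(scale=2.0, size=firing_rates.shape)    # measurement noise

# Fit the decoder: least-squares map from firing rates back to velocity.
decoder, *_ = np.linalg.lstsq(firing_rates, intended_velocity, rcond=None)

# Decode a new intention from noisy activity alone.
new_intent = np.array([[1.0, -0.5]])
new_rates = new_intent @ true_tuning.T + rng.normal(scale=2.0, size=(1, n_channels))
decoded = new_rates @ decoder
print("intended:", new_intent[0], "decoded:", decoded[0])
```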

Marx? Freud? BF Skinner? Watson? As for affiliation with them, do you think the "God of the Gaps" is a reasonable criticism of Christianity? If so, is it just Christians who shouldn't collectively retreat to unfalsifiability, or are such collective retreats in the face of contrary evidence generally bad whoever is doing them?

Umm... I have said some very uncomplimentary things about Marx and Freud. I believe I called the latter Fraud in a comment as recently as a day ago.

Skinner? I am neutral on him. The Skinner Box is an interesting idea, that's about all I can say from memory.

Watson? Presumably buddy of Crick? What did he do wrong? I have no idea.

I have seen plenty of Christians here happily engage in GOTG reasoning. You don't need to go back 50 years or more to find them. Yes, hiding in the dark and dank corners where you can just barely squeeze your eyes closed against the light of empirical inquiry is a shameful display.

But what's your point? I invite you to show me what I have in common with Marx, Freud, Watson and Crick. I am bemused that there exists a natural category the five of us share. I certainly denounce the former two much harder than I can recall any practising Christian here fighting against GOTG. If any do, they have my thanks; I prefer an honest admission of unshakable faith, which I don't have to waste time debating, to waiting for the inexorable march of progress to squeeze them out of the gratings.

Could I get a cite on this? I would like to see some actual captures from dreams or the mind's eye, because I'm pretty sure such things don't exist in the sense I understand the terms. I'm interested in being proved wrong, though.

This A.I. Used Brain Scans to Recreate Images People Saw

If. As in, in the future. As in, not in the present. You recognize that we not only can't emulate a human being, but we aren't anywhere close to being able to, right? That the capability you are reasoning from is, at the moment, entirely fictional?

The line between sufficiently hard scifi and a white paper is blurry indeed.

This is a terrible argument. You are likely using an electronic device that was "fictional" when first imagined to make it.

In the absence of AGI, I think it would take anywhere from 20-40 years to make a high fidelity emulation of a human brain. The bottlenecks are twofold: available compute (the requirements to run a human brain on a computer are large, and the estimates vary by OOMs) and scanning tools, which right now are best suited for tiny slices at a time and can't be used while the subject is alive. Thankfully the latter is not a strict requirement, and shortcuts in compute and better algorithms probably exist. The record as it stands has long exceeded drosophila and roundworms, and the current SOTA is either an entire rat brain or 1% of a human brain.
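For a rough sense of why those compute estimates span orders of magnitude, here's a back-of-envelope sketch. The neuron and synapse counts are standard ballpark figures; the FLOPs-per-synaptic-event numbers are the contested assumption, since they depend entirely on how much biophysical detail you think matters.

```python
# Back-of-envelope FLOP/s estimates for real-time whole-brain emulation.
# Neuron and synapse counts are ballpark figures; the cost per synaptic
# event is the assumption that swings the answer by orders of magnitude.
neurons = 8.6e10                  # ~86 billion neurons
synapses_per_neuron = 1e4         # order-of-magnitude estimate
mean_firing_rate_hz = 1.0         # average spikes per neuron per second

synaptic_events_per_s = neurons * synapses_per_neuron * mean_firing_rate_hz

scenarios = {
    "point-neuron spiking net (~1 FLOP per event)": 1,
    "simple conductance model (~1e2 FLOPs per event)": 1e2,
    "detailed compartmental model (~1e5 FLOPs per event)": 1e5,
}

for name, flops_per_event in scenarios.items():
    total = synaptic_events_per_s * flops_per_event
    print(f"{name}: ~{total:.1e} FLOP/s")

# The outputs span roughly 1e15 to 1e20 FLOP/s -- from "a big GPU cluster"
# to "well beyond today's largest supercomputers", depending on assumptions.
```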

If you disagree, please make it clear that you're exceedingly confident in what I deem to be a very insane proposal: that we will not have OOMs more raw compute in a few decades*, or that scanning techniques will not improve even modestly, given that new advances come out every year or two.

These are all difficult tasks. Hence the long time without AGI helping us. But nothing in my knowledge of physics, chemistry, biology or engineering rules it out, and the EU has enough confidence that it's possible that they spent a billion euros on the Human Brain Project and are now working on its successor, EBRAINS. It's at about the same stage as the Human Genome Project was, and look where we are today. (Tbf, they did a terrible job at it. But it's a hard job nonetheless.)

*Short of civilizational collapse and mass extinction.

All direct evidence available to us shows that the human self has free will: we experience its operation every minute of every day, and can examine that operation in intimate detail. All engineering we do on humans operates on the assumption that free will exists, from education to law to navigating interpersonal relationships. Determinism makes no predictions that can currently be tested. Determinism's previous testable predictions have all been falsified. No engineering we currently do on humans operates according to deterministic principles. All these are facts, to the extent that "facts" can be said to exist.

The fact that all of the above can be so easily ignored can teach one some important things about the operation of human reason, and particularly the prime role that free will takes in that operation. You can't be made to draw a conclusion by presented evidence, because the individual will has a veto over what is even considered evidence. All beliefs are chosen by distinct acts of the will.

Please, not this topic again, I think we've been over this at least twice or thrice and I don't think we've made any progress. Certainly this is a starting point that would require rehashing what was a fruitless debate.

There are reasonably robust theoretical reasons to suppose that reactionless thrusters do not work.

There are theoretical reasons to believe that brain emulation won't work either. Whether they qualify as "reasonably robust" is a question beyond my purview, but your answers so far push me further toward thinking they do.

There is a popular belief in neuroscience that we are primarily data limited, and that producing large, multimodal, and complex datasets will, with the help of advanced data analysis algorithms, lead to fundamental insights into the way the brain processes information. These datasets do not yet exist, and if they did we would have no way of evaluating whether or not the algorithmically-generated insights were sufficient or even correct. To address this, here we take a classical microprocessor as a model organism, and use our ability to perform arbitrary experiments on it to see if popular data analysis methods from neuroscience can elucidate the way it processes information. Microprocessors are among those artificial information processing systems that are both complex and that we understand at all levels, from the overall logical flow, via logical gates, to the dynamics of transistors. We show that the approaches reveal interesting structure in the data but do not meaningfully describe the hierarchy of information processing in the microprocessor. This suggests current analytic approaches in neuroscience may fall short of producing meaningful understanding of neural systems, regardless of the amount of data. Additionally, we argue for scientists using complex non-linear dynamical systems with known ground truth, such as the microprocessor as a validation platform for time-series and structure discovery methods.

...As I understand it, this is a paper where some guys took your proposed approach and applied it to a microprocessor, to see if it would work on a system where there were no appreciable unknowns. With a perfect map of the microprocessor and the general tools we have for investigating the brain, the deeper structure of the chip's processes was completely inaccessible to them, even in principle.

We can reliably produce different sensations and even cause muscular movements. The tech isn't at the point I can make you see HD pictures.

Even if it were, my argument would be the same: showing me a picture by passing data through the visual wiring is not writing to the brain. Giving me a memory of seeing the picture would be writing to the brain. Ditto for the sensations and muscular movements. I can make you feel sensations and make your muscles move without poking the brain at all.

Once I get to the temporal lobe, I can promise meeting God Himself or your money back (and you don't even have to die, temporal lobe epilepsy or stimulation causes religious ecstasy).

We can already induce the sensation of religious ecstasy through a variety of analogue means. Why would doing it with a needle be significant? Can you make an atheist into a Christian, or a Christian into an atheist? Can you make someone love a specific person, or hate them, in a durable, continuous way?

Mind reading and mind control exist. It's not a psychic phenomenon, it uses Bluetooth.

"Mind reading", as in accessing the self, tapping the internal monologue, viewing memories. "Mind control", as in controlling the self, editing memories, changing how someone thinks.

This A.I. Used Brain Scans to Recreate Images People Saw

If I'm understanding the article's description correctly, they are reading sensory intake data live. That is indeed a very neat development, and not something I would have expected, but it still appears to be in the general input/output regime rather than the read/write regime.
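As I understand the generic version of the technique (a rough sketch under my own assumptions, not the paper's actual pipeline), it amounts to fitting a mapping from measured activity to image features and then retrieving or generating a matching picture, which is why I'd still file it under decoding of sensory traffic rather than reading the self:

```python
# Hedged sketch of the generic "reconstruct what someone saw" pipeline:
# fit a linear map from measured brain activity (e.g. fMRI voxels) to
# image feature vectors, then retrieve the closest known image for a new
# scan. Real work uses far richer features and generative decoders; the
# data here is synthetic and the sizes are placeholders.
import numpy as np

rng = np.random.default_rng(1)

n_train, n_voxels, n_features = 1000, 500, 64

image_features = rng.normal(size=(n_train, n_features))    # features of shown images
true_encoding = rng.normal(size=(n_features, n_voxels))     # how features drive voxels
voxel_responses = image_features @ true_encoding
voxel_responses += rng.normal(scale=1.0, size=voxel_responses.shape)

# Decoder: least-squares map from voxel responses back to image features.
decoder, *_ = np.linalg.lstsq(voxel_responses, image_features, rcond=None)

# "Reconstruct" a new viewing by nearest-neighbour retrieval over a gallery.
gallery = rng.normal(size=(200, n_features))
shown = gallery[42]
scan = shown @ true_encoding + rng.normal(scale=1.0, size=n_voxels)
decoded_features = scan @ decoder
best_match = np.argmin(np.linalg.norm(gallery - decoded_features, axis=1))
print("shown index: 42, retrieved index:", best_match)
```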

This is a terrible argument. You are likely using an electronic device that was "fictional" when first imagined to make it.

When the guy tried to parachute off the Eiffel tower, he did so because he'd tested the idea with models first and had some direct evidence of the thing he was attempting to harness. My understanding is that we do not have anything like that for the self, the mind, the me inside the brain. We can access the data going in and out of the brain, but to my knowledge we have no insight at all on the actual computation, its operation or its mechanisms. We have matter and energy patterns, and we presume that these must add up to consciousness not because we have any insight into the details of the mechanism, but because Materialism requires that all other theories be discarded. But even if this is true, that is still not evidence that the patterns and structures are tractable under the conditions of the material world, for the same reason that it is probably not possible, even in principle, to accurately predict the weather in St. Petersburg a year from now. In my experience, arguments to the contrary amount to saying that we don't know what the obstacles might be, so there probably aren't any. That is not an example of reasoning from evidence.

Watson?

John Watson, the father of Behaviourism. His thesis was admirably succinct:

“Give me a dozen healthy infants, well-formed, and my own specified world to bring them up in and I’ll guarantee to take any one at random and train him to become any type of specialist I might select — doctor, lawyer, artist, merchant-chief and, yes, even beggar-man and thief, regardless of his talents, penchants, tendencies, abilities, vocations, and race of his ancestors.”

Like you, he had "evidence" that this was possible: psychological experiments demonstrating conditioning, habit formation, etc. He naïvely extrapolated that evidence well outside its domain, and ignored all evidence to the contrary, and so in time his core claims were thoroughly falsified.

But what's your point? I invite you to show me what I have in common with Marx, Freud, Watson and Crick.

You all share the conviction that the human mind is arbitrarily tractable, controllable, malleable, that Selves can be engineered with no regard to their own will, and despite all evidence to the contrary. Marx thought it could be done through engineering social conditions, Freud through manipulation of the psyche, Watson and Skinner through fine-grained manipulation of the environment, and you think it will be done through nanotech and emulation. A lot of people have believed this idea, especially over the last century or two, based on no testable evidence at all, and a lot of very serious efforts have been made to actually do it, all of which have failed. None of those failures have done a thing to shake the confidence of those who hold to this idea, because the idea is not derived from evidence, but rather from axioms.

I think you're wrong. But I care a lot less about whether you're wrong, than I do about pointing out the mechanics of how beliefs are formed. It should in principle be possible to get you to recognize the difference between "I know how this works because I've personally worked with it" and "I know how this works, because I have a theory I haven't managed to poke holes in yet". But the most recent version of this conversation I had resulted in my opposite claiming both that Determinism was demonstrated by evidence, and that it was impossible even in theory for evidence against Determinism to exist, because it was true by definition. So who the fuck knows any more?

The record as it stands has long exceeded drosophila and roundworms, and the current SOTA is either an entire rat brain or 1% of a human brain.

Are you claiming that scientists can, right now, emulate an entire, active rat brain? That seems pretty implausible to me, but I stand to be corrected. I'm not confident that "1% of a human brain" is even a coherent statement, that the phrase means anything at all. 1% of what, measured how?

If you disagree, please make it clear that you're exceedingly confident in what I deem to be a very insane proposal: that we will not have OOMs more raw compute in a few decades*, or that scanning techniques will not improve even modestly, given that new advances come out every year or two.

No, I think you're straightforwardly wrong about what is possible right now, and especially about what it shows. I don't think scientists can "emulate a rat brain", meaning create a digital simulacrum of a rat that allows them read/write access to the equivalent of a live rat brain. I certainly do not believe that scientists can emulate "1% of a human brain", under any reasonable interpretations of those words. My argument isn't that compute won't improve, it's that the mind is probably intractable, and that certainly no evidence of tractability is currently available. I have not looked into either the Human Brain Project or EBRAINS, but I'm pretty confident they don't do anything approaching actual emulation of a human mind.

But nothing in my knowledge of physics, chemistry, biology or engineering rules it out, and the EU has enough confidence that it's possible that they spent a billion euros on the Human Brain Project and are now working on its successor, EBRAINS.

Behaviourism probably got a whole lot more than a billion, all told. Marxism got trillions. These ideas ran the world for more than a century, and arguably still run the world, based on zero actual validity or utility.

I was a big fan of transhumanism, once upon a time. I was very big into the idea of brain emulation. I too crave the strength and certainty of steel, much to my wife's chagrin. I used to believe that the brain was obviously a computer, and science would equally obviously reduce its structures and contents to engineering problems in time. But looking back, I think it's pretty clear that this belief came from inference, not evidence, and from some pretty shaky inferences too. As you put it, "nothing in my knowledge of physics, chemistry, biology or engineering rules it out", and I wanted to believe it, and it fit neatly with other things I wanted to believe, so I discarded the contrary evidence and ran with it. That's what people do.

"nothing in my knowledge of physics, chemistry, biology or engineering rules out" my belief in God. Of course, my understanding of God has been shaped by my understanding of all of these, so I effortlessly avoid pitfalls that observably trapped some of my predecessors. In the same way, your belief in the nature of the mind is shaped by your understanding of all these, and so you effortlessly avoid the traps that caught many of your preceding mind-engineer transhumanists. The fact that I don't attempt to argue for young earth creationism doesn't mean I actually have any better an understanding of the reality or fictitiousness of God than those who came before me. In the same way, the fact that you don't think the brain can be engineered by psychoanalysis or socialist revolution doesn't mean you understand the mind better than Watson or Marx or Freud; we didn't derive our understandings from first principles, but from learning from the painful experience of others. Nothing about that indicates any significant ability to get novel answers right ourselves.

Please, please, recognize the difference between "I know this is so" and "I don't know why this shouldn't be so". Both are useful; I argue that both are entirely necessary. But it pays to be clear about which one you're using in a given situation, and to not mix the two up.

More comments

But then in the '70s a bunch of psychologists saw computers and were astonished at how much they resembled human thought, and came to the conclusion that the human mind works like a computer. I'm personally against the expression "Artificial Intelligence" because computers are neither intelligent nor dumb.

All physical processes can be simulated by a computer. They are all computers. You seem to believe animal brains are special, but they aren't. An Intel CPU and an Nvidia GPU are both made of electrons and quarks, just like the neurons in your brain. Both are the same particles governed by the same laws of physics and just as computable as any other configuration of particles.

Of course animal brains are special, the fact that they can be simulated doesn't make them less so. The fact that you can take a picture of The Garden of Earthly Delights does not make it any less special. If you really want to be a nerd then no, nothing is made of electrons and quarks. In fact, nothing is made out of nothing and nothing exists at all, and if an atomic bomb were to explode right here beneath my chair nothing would change and everything would remain exactly where it is and where it has always been.