
There’s no psychopathology.

I’d like to start with a few disclaimers. This is not an anti-psychiatry post. This is also not the place to ask or receive advice about your mental health, or what nowadays is called “mental health”.

For some time now I’ve been feeling like I live in a different world than most people I know. It has come to a point where I have to face an awkward alternative: either most people I know are wrong (including learned men and experts) or I am insane. As I don’t believe I have lost my sanity, and as I believe I have very strong arguments to hold my ideas against all reasonable counterarguments, I think it’s about time I sat down and shared my ideas more or less publicly. This is one such idea. What follows is a summary of my academic studies, my professional experience working in the field of mental health, my own personal thoughts, and the influence of several authors, chiefly Georges Canguilhem and Jacques Lacan.

The APA defines psychopathology as “the scientific study of mental disorders, including their theoretical underpinnings, etiology, progression, symptomatology, diagnosis, and treatment”. It is a jurisdiction of medicine, although that does not exclude other disciplines from delving into it as well. It is intrinsically linked to psychiatry, to the point where one cannot exist without the other. But psychiatry itself is a rather contradictory branch of medicine: while every other specialization of medicine has built its own object of study by exploring a specific organ or function of the body, psychiatry exists only by virtue of that which it ignores. In its origins, psychiatry was born to deal with what has classically been called insanity: those people described by Descartes who believed they were made of glass or who fancied themselves to be pitchers of water. These outlandish delusions have always caused turmoil in society because nobody really knows where they come from, what they mean and, most importantly, what to do with them. Insane people clearly need help but do not want it, or the help they are willing to receive is impossible for other people to give. They break the law but they are not criminals, or at least they are benign. They behave like savages but are human beings and deserve to be treated as such.

Now enter the Enlightenment: Lady Reason triumphs all over the Western world; everything now has, or will have, a place and an explanation in the encyclopedia of universal knowledge. And what we understand, we control. There are now a bunch of physicians who have little evidence but little doubt that these people are sick and that it is their task to heal them. And try they will, with all their available resources, but with little success. So while neurology developed from the study of the brain, cardiology from that of the heart, and so on, psychiatry was born out of sheer embarrassment. It is the branch of medicine that studies mental disorders. However, being a part of modern scientific medicine, it cannot but assert that mental disorders can be explained by studying the body. The contradiction is that the day psychiatry discovers the bodily cause of mental disorders will be the day it ceases to exist as a specialization of medicine, for said cause would fall under the jurisdiction of another specialization: if it’s in the brain it would be neurology, if it’s in the genes it would be medical genetics, and if we were to discover a new organ in the body then a new specialization would be born to study it, leaving psychiatry in the past.

Therefore, psychiatry exists only because we do not know what mental disorders are. In fact, we don’t even know whether the mind is real, much less whether it can get sick. What do we actually know, then? We know that (1) there are people who need help, and (2) there are means to help them. So it becomes a matter of administering a scarce resource. This is what psychopathology really is: it is not a science of mental pathology, it is the art of distributing psychiatric drugs and psychological treatments.

There used to be psychopathology. Classic psychiatrists wrote impressive treatises on the subject, with thousands of pages describing in detail and classifying the behavior of their patients. The mountains really were in labour; alas, only a mouse was born: no progress was made regarding the causes and, most importantly, the treatment of such behaviors. This last problem was drastically improved by the invention of psychopharmacology. Suddenly psychiatrists had a powerful tool to treat the symptoms of insanity, so even though they weren’t any closer to understanding these symptoms, they changed their ideas on the subject to reflect the influence of psychiatric drugs. These influences can be accurately gauged by the changes in the DSM. The first DSMs included theories about the origin and nature of mental disorders; the latest DSMs only mention the clinical symptoms necessary to prescribe a treatment. When a patient is diagnosed with depression, the only relevant information that is learned is that said patient will start a treatment for depression.

So are mental disorders real? Of course they are. Whether they are mental or disorders, that’s another question. They are real because they are a set of behaviors that have been observed to occur together: Feelings of sadness, self-harming ideas or behaviors, inability to feel pleasure, these are all things that are real, observable, measurable, and treatable. But are these symptoms a mental problem? Are they a medical problem, or a problem at all? This is highly debatable, and in any case, not a solid foundation for a science.

If a person feels sad all the time, it is only natural for them to think that this life is not worth living. But the opposite is also true: if a person is convinced that there is nothing good in this world, then they will feel sad and hopeless all the time. So which comes first? Should we treat the sadness or the thoughts? And what if the person likes to feel sad, if they don’t want any help? Should we force them? To make matters worse, it turns out that both psychiatric drugs and psychotherapy are effective*. And that is only counting the treatments that have empirical evidence to back them up and are approved by psychiatry, because, under the right circumstances, literally anything can be therapeutic: there’s horse therapy, art therapy, music therapy, dog therapy, video-game therapy, you name it.

There are some who believe in the demon deceptor: a person, or a group of people, who control our reality and make lies pass for truth, usually with malicious intent. These people believe that the pharmaceutical industry has created mental disorders only to sell drugs, and that psychologists and psychiatrists are their accomplices. For my part, I think it is overly optimistic to believe that anyone has such a degree of control over the situation as to make it bend to their will. I believe that people are just confused, and with good reason, because being human is quite a bizarre experience. There are of course those who profit from the confusion of their fellow man and prey on their ignorance. But even evil has its limits, and nobody can summon such perfect wickedness that no good may come of it. The truth is that, for all the confusion our idea of psychopathology entails, the treatment and care of people with mental disorders have progressed a great deal in recent decades.

On the other hand there are the encyclopedists, who will argue that the fact that we haven’t discovered the bodily sources of mental disorders does not mean that we won’t succeed in the future. We have certainly made discoveries in this direction: not only do we know now that it is impossible to be sad or mad without a brain, but we also know which specific brain part or substance is required. Yet even after all the advances in neurology, still no neurological exam is indicated for the diagnosis of mental disorders, and for good reason: what ultimately decides whether someone has a mental disorder or not are arbitrary criteria. Homosexuality is no longer a mental illness only because society has shifted its values towards the acceptance of diverse sexual orientations; were it not for that, we would speak of the “homosexual brain” just as we now speak of “the depressed brain”. We could also speak of “the carpenter’s brain” or “the writer’s brain”, and treat all of those conditions as illnesses.

In conclusion, I believe that contemporary psychopathology is a case of finding a hammer and suddenly realizing we are surrounded by nails. If something can be treated as an illness it will be treated as an illness, because that is l’esprit de l’époque. Classifying something as an illness, assigning it a part of the brain, and prescribing it a drug as treatment makes it real and important, so politicians, scientists, and the general public become aware of its existence and direct resources its way. This is why every day we “discover” that more things are linked to mental health: poor housing, poor nourishment, the weather, sexual orientation, racial discrimination, political ideologies… and as there is no psychopathology, there is no limit to psychic pathologies. There’s a drug or a therapy for everything. It’s no coincidence that we now have the most effective treatments in history and the highest rate of access to mental health services ever, yet the rates of mental disorders are soaring as well. And despite all the advances in psychotherapy and psychopharmacology, no breakthroughs have been made in psychopathology.

I’m convinced that in the future people will look at our ideas on psychopathology as we now look at humorism.

Sources:

APA Definition of Psychopathology: https://dictionary.apa.org/psychopathology

*Psychotherapy just as effective as pharmacotherapy: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5244449/


People have claimed to have developed reactionless thrusters before. A test I've heard proposed is to hang the thruster and its power source from a pendulum, inside a sealed plastic bag, and then show it holding the pendulum off-center. In a similar vein, here are some proposed tests of read/write access to the human mind: a working lie detector, love potion, mind-reader, or mind-control device would all be obvious demonstrations of the basic capability. Do you believe any of these exist, or that they will exist in, say, the next few years? If not, why not?

A tailor jumping off the Eiffel tower with a large apparatus of mostly linen did not prevent the invention of working parachutes. The dozens of people jumping off cliffs to their deaths throughout the ages did not prevent heavier-than-air flight.

There are reasonably robust theoretical reasons to suppose that reactionless thrusters do not work. I wish to see the equivalent for BCIs.

It certainly doesn't seem to be. I'm all for it, but this is reading output, not even input/output, and certainly not read/write.

https://en.wikipedia.org/wiki/Retinal_implant

Foerster was the first to discover that electrical stimulation of the occipital cortex could be used to create visual percepts, phosphenes.[1] The first application of an implantable stimulator for vision restoration was developed by Drs. Brindley and Lewin in 1968.[2] This experiment demonstrated the viability of creating visual percepts using direct electrical stimulation, and it motivated the development of several other implantable devices for stimulation of the visual pathway, including retinal implants

(The eyes are also an extension of the brain from an anatomical and developmental standpoint, but you can write imagery to it directly, as the excerpt showing the effects of stimulation to the occipital cortex shows)

We can reliably produce different sensations and even cause muscular movements. The tech isn't at the point I can make you see HD pictures.

You can damage my brain right now, or possibly jam it. You can't write on it in any meaningful fashion, as far as I know. Again, if you could, that would necessarily imply the present existence of mind reading and mind control, correct?

Now, I can always adjust the voltage on the electrode; the indemnity documents are purely an insurance matter and not because I'm not licensed for neurosurgery. I can reliably induce plenty of somatic sensations in you and show you some really sweet colors, and once I get to the temporal lobe, I can promise meeting God Himself or your money back (and you don't even have to die, temporal lobe epilepsy or stimulation causes religious ecstasy).

We can read minds. See video linked. It literally picks up on his will to move his paralyzed arms and converts that to equivalent mouse movements. We can control minds. Just very crudely. The dude in the video says he intends to cosplay as Professor X, which is hilarious, and also not far from reality. If he can move a mouse cursor, he can move a robot, with his mind, at a distance. This has been done with Neuralink in monkeys, and with other BCIs, also in monkeys.

Mind reading and mind control exist. It's not a psychic phenomenon, it uses Bluetooth.
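
To be concrete about what “converts that to equivalent mouse movements” involves: at its core, a motor BCI decoder is just a learned mapping from neural activity to intended movement. Here's a minimal sketch in Python with synthetic data standing in for real recordings; actual systems use fancier decoders (Kalman filters and the like), so treat this as an illustration of the idea rather than of any particular device:

```python
import numpy as np

# Minimal sketch of a motor BCI decoder: map per-bin spike counts on
# recorded channels to 2D cursor velocity. The data here is synthetic;
# a real calibration session records activity while the user imagines
# moving, then fits the decoder against the intended movements.
rng = np.random.default_rng(0)
n_bins, n_channels = 5000, 64

tuning = rng.normal(size=(n_channels, 2))  # unknown neural "tuning"
rates = rng.poisson(lam=5.0, size=(n_bins, n_channels)).astype(float)
velocity = rates @ tuning + rng.normal(scale=2.0, size=(n_bins, 2))

# Calibration phase: fit the mapping by ordinary least squares.
weights, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# Online phase: each new bin of spike counts becomes a cursor update.
new_bin = rng.poisson(lam=5.0, size=(1, n_channels)).astype(float)
vx, vy = (new_bin @ weights)[0]
print(f"decoded cursor velocity: ({vx:+.2f}, {vy:+.2f})")
```

Swap the cursor for a robot arm and nothing in the decoding step changes, which is why the Professor X cosplay isn't as far-fetched as it sounds.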

Marx? Freud? BF Skinner? Watson? As for affiliation with them, do you think the "God of the Gaps" is a reasonable criticism of Christianity? If so, is it just Christians who shouldn't collectively retreat to unfalsifiability, or are such collective retreats in the face of contrary evidence generally bad whoever is doing them?

Umm.. I have said some very uncomplimentary things about Marx and Freud. I believe I called the latter Fraud in a comment as recently as a day ago.

Skinner? I am neutral on him. The Skinner Box is an interesting idea, that's about all I can say from memory.

Watson? Presumably buddy of Crick? What did he do wrong? I have no idea.

I have seen plenty of Christians here happily engage in GOTG reasoning. You don't need to go back 50 years or more to find them. Yes, hiding in the dark and dank corners where you can just barely squeeze your eyes closed against the light of empirical inquiry is a shameful display.

But what's your point? I invite you to show me what I have in common with Marx, Freud, Watson and Crick. I am bemused that there exists a natural category the five of us share. I certainly denounce the former two much harder than I can recall any practising Christian here fighting against GOTG. If so, they have my thanks, I prefer an honest admission that they have an unshakable faith that I don't have to waste time debating rather than having to wait for the inexorable march of progress to squeeze them out of the gratings.

Could I get a cite on this? I would like to see some actual captures from dreams or the mind's eye, because I'm pretty sure such things don't exist in the sense I understand the terms. I'm interested in being proved wrong, though.

This A.I. Used Brain Scans to Recreate Images People Saw

If. As in, in the future. As in, not in the present. You recognize that we not only can't emulate a human being, but we aren't anywhere close to being able to, right? That the capability you are reasoning from is, at the moment, entirely fictional?

The line between sufficiently hard scifi and a white paper is blurry indeed.

This is a terrible argument. You are likely using an electronic device that was "fictional" when first imagined to make it.

In the absence of AGI, I think it would take anywhere from 20-40 years to make a high-fidelity emulation of a human brain. The bottlenecks are available compute (the requirements to run a human brain on a computer are large, and the estimates vary by OOMs) and scanning tools: right now they're best suited for tiny slices at a time, and you can't use them while the subject is alive. Thankfully the latter is not a strict requirement, and shortcuts in compute and better algorithms probably exist. The record as it stands has long exceeded drosophila and roundworms, and the current SOTA is either an entire rat brain or 1% of a human brain.
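
To make the OOM spread concrete, here's a back-of-envelope sketch. Every per-neuron cost and the 1 ms timestep below are illustrative assumptions, not measured figures; the point is that the choice of modeling fidelity alone swings the answer by about eight orders of magnitude:

```python
# Back-of-envelope compute for real-time whole-brain emulation.
# All per-neuron-per-step costs are illustrative assumptions; the
# spread between scenarios is the point, not any single number.
NEURONS = 8.6e10          # ~86 billion neurons in a human brain
STEPS_PER_SECOND = 1e3    # assume a 1 ms simulation timestep

scenarios = {
    "point neurons (tens of FLOPs/step)": 1e1,
    "detailed compartmental models": 1e5,
    "molecular-level detail": 1e9,
}

for name, flops_per_neuron_step in scenarios.items():
    total = NEURONS * STEPS_PER_SECOND * flops_per_neuron_step
    print(f"{name}: ~{total:.1e} FLOP/s to run in real time")
```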

If you disagree, please make it clear you're exceedingly confident in what I deem to be a very insane proposal that we will not have OOMs more raw compute in a few decades*, or that scanning techniques will not improve even modestly, given that new advances come out every year or two.

These are all difficult tasks. Hence the long timeline without AGI helping us. But nothing in my knowledge of physics, chemistry, biology or engineering rules it out, and the EU has enough confidence that it's possible that they spent a billion euros on the Human Brain Project and are now working on its successor, EBRAINS. It's at about the same stage as the Human Genome Project was, and look where we are today. (Tbf, they did a terrible job at it. But it's a hard job nonetheless.)

*Short of civilizational collapse and mass extinction.

All direct evidence available to us shows that the human self has free will: we experience its operation every minute of every day, and can examine that operation in intimate detail. All engineering we do on humans operates on the assumption that free will exists, from education to law to navigating interpersonal relationships. Determinism makes no predictions that can currently be tested. Determinism's previous testable predictions have all been falsified. No engineering we currently do on humans operates according to deterministic principles. All these are facts, to the extent that "facts" can be said to exist.

The fact that all of the above can be so easily ignored can teach one important things about the operation of human reason, and particularly the prime role that free will takes in that operation. You can't be made to draw a conclusion by presented evidence, because the individual will has veto on what is even considered evidence. All beliefs are chosen by distinct acts of the will.

Please, not this topic again, I think we've been over this at least twice or thrice and I don't think we've made any progress. Certainly this is a starting point that would require rehashing what was a fruitless debate.

There are reasonably robust theoretical reasons to suppose that reactionless thrusters do not work.

There are theoretical reasons to believe that brain emulation won't work either. Whether they qualify as "reasonably robust" is a question beyond my purview, but your answers so far incline me further toward thinking so.

There is a popular belief in neuroscience that we are primarily data limited, and that producing large, multimodal, and complex datasets will, with the help of advanced data analysis algorithms, lead to fundamental insights into the way the brain processes information. These datasets do not yet exist, and if they did we would have no way of evaluating whether or not the algorithmically-generated insights were sufficient or even correct. To address this, here we take a classical microprocessor as a model organism, and use our ability to perform arbitrary experiments on it to see if popular data analysis methods from neuroscience can elucidate the way it processes information. Microprocessors are among those artificial information processing systems that are both complex and that we understand at all levels, from the overall logical flow, via logical gates, to the dynamics of transistors. We show that the approaches reveal interesting structure in the data but do not meaningfully describe the hierarchy of information processing in the microprocessor. This suggests current analytic approaches in neuroscience may fall short of producing meaningful understanding of neural systems, regardless of the amount of data. Additionally, we argue for scientists using complex non-linear dynamical systems with known ground truth, such as the microprocessor as a validation platform for time-series and structure discovery methods.

...As I understand it, this is a paper where some guys took your proposed approach and applied it to a microprocessor, to see if it would work on a system where there were no appreciable unknowns. With a perfect map of the microprocessor and the general tools we have for investigating the brain, the deeper structure of the chip's processes was completely inaccessible to them, even in principle.

We can reliably produce different sensations and even cause muscular movements. The tech isn't at the point I can make you see HD pictures.

Even if it were, my argument would be the same: showing me a picture by passing data through the visual wiring is not writing to the brain. Giving me a memory of seeing the picture would be writing to the brain. Ditto for the sensations and muscular movements. I can make you feel sensations and make your muscles move without poking the brain at all.

Once I get to the temporal lobe, I can promise meeting God Himself or your money back (and you don't even have to die, temporal lobe epilepsy or stimulation causes religious ecstasy).

We can already induce the sensation of religious ecstasy through a variety of analogue means. Why would doing it with a needle be significant? Can you make an atheist into a Christian, or a Christian into an atheist? Can you make someone love a specific person, or hate them, in a durable, continuous way?

Mind reading and mind control exist. It's not a psychic phenomenon, it uses Bluetooth.

"Mind reading", as in accessing the self, tapping the internal monologue, viewing memories. "Mind control", as in controlling the self, editing memories, changing how someone thinks.

This A.I. Used Brain Scans to Recreate Images People Saw

If I'm understanding the article's description correctly, they are reading sensory intake data live. That is indeed a very neat development, and not something I would have expected, but it still appears to be in the general input/output regime rather than the read/write regime.

This is a terrible argument. You are likely using an electronic device that was "fictional" when first imagined to make it.

When the guy tried to parachute off the Eiffel tower, he did so because he'd tested the idea with models first and had some direct evidence of the thing he was attempting to harness. My understanding is that we do not have anything like that for the self, the mind, the me inside the brain. We can access the data going in and out of the brain, but to my knowledge we have no insight at all on the actual computation, its operation or its mechanisms. We have matter and energy patterns, and we presume that these must add up to consciousness not because we have any insight into the details of the mechanism, but because Materialism requires that all other theories be discarded. But even if this is true, that is still not evidence that the patterns and structures are tractable under the conditions of the material world, for the same reason that it is probably not possible, even in principle, to accurately predict the weather in St. Petersburg a year from now. In my experience, arguments to the contrary amount to saying that we don't know what the obstacles might be, so there probably aren't any. That is not an example of reasoning from evidence.
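
The weather point is the standard sensitivity-to-initial-conditions problem. Here's a toy demonstration using the logistic map, a textbook example of chaos (an analogy only, nothing to do with actual meteorology): two starting states that agree to ten decimal places are uncorrelated within a few dozen steps, no matter how precisely you compute.

```python
# Chaotic divergence demo with the logistic map (r = 4 is chaotic).
# Two initial conditions differing by 1e-10 become uncorrelated after
# a few dozen iterations, since the error roughly doubles each step.
r = 4.0
x, y = 0.3, 0.3 + 1e-10

for step in range(1, 61):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: x={x:.6f} y={y:.6f} |diff|={abs(x - y):.2e}")
```

If the mind's dynamics are tractable in this sense, that has to be demonstrated, not assumed.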

Watson?

John Watson, the father of Behaviorism. His thesis was admirably succinct:

“Give me a dozen healthy infants, well-formed, and my own specified world to bring them up in and I’ll guarantee to take any one at random and train him to become any type of specialist I might select — doctor, lawyer, artist, merchant-chief and, yes, even beggar-man and thief, regardless of his talents, penchants, tendencies, abilities, vocations, and race of his ancestors.”

Like you, he had "evidence" that this was possible: psychological experiments demonstrating conditioning, habit formation, etc. He naïvely extrapolated that evidence well outside its domain, and ignored all evidence to the contrary, and so in time his core claims were thoroughly falsified.

But what's your point? I invite you to show me what I have in common with Marx, Freud, Watson and Crick.

You all share the conviction that the human mind is arbitrarily tractable, controllable, malleable, that Selves can be engineered with no regard to their own will, and despite all evidence to the contrary. Marx thought it could be done through engineering social conditions, Freud through manipulation of the psyche, Watson and Skinner through fine-grained manipulation of the environment, and you think it will be done through nanotech and emulation. A lot of people have believed this idea, especially over the last century or two, based on no testable evidence at all, and a lot of very serious efforts have been made to actually do it, all of which have failed. None of those failures have done a thing to shake the confidence of those who hold to this idea, because the idea is not derived from evidence, but rather from axioms.

I think you're wrong. But I care a lot less about whether you're wrong, than I do about pointing out the mechanics of how beliefs are formed. It should in principle be possible to get you to recognize the difference between "I know how this works because I've personally worked with it" and "I know how this works, because I have a theory I haven't managed to poke holes in yet". But the most recent version of this conversation I had resulted in my opposite claiming both that Determinism was demonstrated by evidence, and that it was impossible even in theory for evidence against Determinism to exist, because it was true by definition. So who the fuck knows any more?

The record as it stands has long exceeded drosophila and roundworms, and the current SOTA is either an entire rat brain or 1% of a human brain.

Are you claiming that scientists can, right now, emulate an entire, active rat brain? That seems pretty implausible to me, but I stand to be corrected. I'm not confident that "1% of a human brain" is even a coherent statement, that the phrase means anything at all. 1% of what, measured how?

If you disagree, please make it clear you're exceedingly confident in what I deem to be a very insane proposal that we will not have OOMs more raw compute in a few decades*, or that scanning techniques will not improve even modestly, given that new advances come out every year or two.

No, I think you're straightforwardly wrong about what is possible right now, and especially about what it shows. I don't think scientists can "emulate a rat brain", meaning create a digital simulacrum of a rat that allows them read/write access to the equivalent of a live rat brain. I certainly do not believe that scientists can emulate "1% of a human brain", under any reasonable interpretation of those words. My argument isn't that compute won't improve, it's that the mind is probably intractable, and that certainly no evidence of tractability is currently available. I have not looked into either the Human Brain Project or EBRAINS, but I'm pretty confident they don't do anything approaching actual emulation of a human mind.

But nothing in my knowledge of physics, chemistry, biology or engineering rules it out, and the EU has enough confidence that it's possible that they spent a billion euros on the Human Brain Project and are now working on its successor, EBRAINS.

Behaviorism probably got a whole lot more than a billion, all told. Marxism got trillions. These ideas ran the world for more than a century, and arguably still run the world, based on zero actual validity or utility.

I was a big fan of transhumanism, once upon a time. I was very big into the idea of brain emulation. I too crave the strength and certainty of steel, much to my wife's chagrin. I used to believe that the brain was obviously a computer, and science would equally obviously reduce its structures and contents to engineering problems in time. But looking back, I think it's pretty clear that this belief came from inference, not evidence, and from some pretty shaky inferences too. As you put it, "nothing in my knowledge of physics, chemistry, biology or engineering rules it out", and I wanted to believe it, and it fit neatly with other things I wanted to believe, so I discarded the contrary evidence and ran with it. That's what people do.

"nothing in my knowledge of physics, chemistry, biology or engineering rules out" my belief in God. Of course, my understanding of God has been shaped by my understanding of all of these, so I effortlessly avoid pitfalls that observably trapped some of my predecessors. In the same way, your belief in the nature of the mind is shaped by your understanding of all these, and so you effortlessly avoid the traps that caught many of your preceding mind-engineer transhumanists. The fact that I don't attempt to argue for young earth creationism doesn't mean I actually have any better an understanding of the reality or fictitiousness of God than those who came before me. In the same way, the fact that you don't think the brain can be engineered by psychoanalysis or socialist revolution doesn't mean you understand the mind better than Watson or Marx or Freud; we didn't derive our understandings from first principles, but from learning from the painful experience of others. Nothing about that indicates any significant ability to get novel answers right ourselves.

Please, please, recognize the difference between "I know this is so" and "I don't know why this shouldn't be so". Both are useful; I argue that both are entirely necessary. But it pays to be clear about which one you're using in a given situation, and to not mix the two up.

What is going to happen when we can simulate all of this stuff in a few years? Are you going to admit defeat or are you just going to come up with a new laundry list of reasons why a fully simulated human brain explains nothing? It is interesting that you say you used to believe as we do. What caused you to abandon materialism for spiritualism? https://youtube.com/watch?v=7gqvFgo-sS0

What is going to happen when we can simulate all of this stuff in a few years? Are you going to admit defeat or are you just going to come up with a new laundry list of reasons why a fully simulated human brain explains nothing?

If someone can actually demonstrate read/write on a human mind, I'll absolutely concede that read/write on a human mind has been achieved. Why would I do otherwise? My entire argument, here and previously with you, is that direct evidence should be weighted higher than axiomatic inference. Further, it's difficult to get a better example of madness than "I believe X because of evidence, and also it is impossible for evidence of !X to exist, even if it by all appearances does in fact exist."

What caused you to abandon materialism for spiritualism?

At no point in our previous debate did I advocate for spiritualism in any form. I am entirely willing to concede that Materialism might be entirely correct, and that belief in it is as rational as any other axiom one chooses. I simply note that it appears to be unprovable, since we know for a fact that significant parts of it appear to exist where we cannot access them, even in principle. I further note that the standard arguments lean heavily on isolated demands for rigor, as I believe our last exchange demonstrated quite well.

I stopped being a materialist because being a materialist did not deliver results. I have not seen a way in which abandoning Materialism has compromised my reason or my ability to engage with the material world; it did not force me to believe in a flat earth or in faith healers or to doubt empirical reality in any way. I think the change has removed a number of blind spots from my reason that I previously suffered from, and it helps me better understand why so much of "rationality" is so self-evidently irrational, why those who claim to believe only in what they can see and touch and quantify nevertheless adopt absolute belief in tissue-thin fictions; the history of the field of psychology is my go-to example, but there are plenty of others.

In any case, if I am mad, it should be easy to refute my arguments, no?

I believe we reached the terminal end point of discussion on this topic during the last go-round; it is just so damn applicable to every subject that it keeps coming up. I was just surprised to learn you once thought as I do. "we're all mad here" after all.

I was just surprised to learn you once thought as I do. "we're all mad here" after all.

On that note, and to provide a little more detail, I think it's pretty clear that ~all people have an easier time believing things they want to be true, than things they want to be false. Above, there's the idea brought up that I'd paraphrase as "I don't know it's impossible, so I'm going to presume it's possible", and this is usually a standard applied to something people want to be true.

I think I wanted brain emulation to be true, because I consumed a lot of sci-fi that presented it as super cool, and because I was young and had a young person's perspective on death as being some super-scary thing to be avoided at all costs. I wanted more cool things and I didn't want to die, so I wanted brain emulation to be real, and there was no conclusive evidence it wasn't possible, so I assumed it was.

Because my assessment of the facts was driven by preferences in this way, the easiest route to changing my mind was to attack the preference, not the assessment of evidence the preference was driving. Spending a decade wanting to die removed a lot of the "death is scary" motive. C.S. Lewis's Abolition of Man provided a heaping helping of the less pleasant implications of such technology, and also provided the historical counter-evidence in retrospect, entirely from within the materialistic frame as I recall. The short story Lena provides a shorter and more visceral illumination of the downsides, without even the second-hand association to anything beyond Materialism.

I think mortality is a pretty good deal under present conditions, and if I am wrong about God I consider annihilation a reasonable consolation prize. I'll be convinced that brain emulation is possible when I see actual evidence of brain emulation being possible. If it turns out to be possible, I will do everything in my power to avoid being emulated. If determinism can be proved, it's hard to see how that would improve the world in any way.

This suggests current analytic approaches in neuroscience may fall short of producing meaningful understanding of neural systems, regardless of the amount of data.

Uh... The paper you linked is at least 50% academic shitpost or meme.

I'm not kidding. I've read it before.

It has legitimate arguments about neuroscience and the validity of its analysis tools, but those tools were not designed to analyze von Neumann architectures and transistor-based circuits.

I'm not a neuroscientist, but the principles of neuroscience are both not designed for the task these guys did (and they did it at least partially as a joke) and have proven results elsewhere. Further, we have better options for circuit analysis in silicon and don't have them in neurology, so the paper correctly points out that they're flawed, we just don't have better alternatives but are working on them.

We understand a great deal about many individual pathways in the brain, such as the optic pathways, and there are hundreds of different pathways we know a great deal about down to the neuronal level, while being very far from being able to tell what any arbitrary neuron does.

This is because neurons are complex. It takes on the order of 1000 ML neurons to simulate a single biological neuron, but guess what, it's been done.

https://www.quantamagazine.org/how-computationally-complex-is-a-single-neuron-20210902/
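
If you want a feel for what "simulating a neuron with ML neurons" means in practice, here's a toy version of the fitting exercise. The "biological" neuron below is a deliberately simplified stand-in (a saturating nonlinearity over weighted dendritic input), nothing like the detailed biophysical model in the actual paper; the point is just the shape of the method: generate input/output pairs from the target, then train a small network to reproduce them.

```python
import numpy as np

# Toy version of the neuron-approximation result: train a small
# network to mimic a neuron's input-output function. The target here
# is a hypothetical sigmoidal firing-rate neuron, not the detailed
# compartmental model used in the real study.
rng = np.random.default_rng(1)

def target_neuron(x, w):
    """Stand-in neuron: sigmoidal firing rate over dendritic drive."""
    return 1.0 / (1.0 + np.exp(-(x @ w)))

n_inputs, n_hidden = 10, 32
w_true = rng.normal(size=n_inputs)
X = rng.normal(size=(2000, n_inputs))
y = target_neuron(X, w_true)

# One-hidden-layer network trained by plain full-batch gradient descent.
W1 = rng.normal(scale=0.1, size=(n_inputs, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=n_hidden)
b2 = 0.0
lr = 0.05

for step in range(3000):
    h = np.tanh(X @ W1 + b1)          # hidden activations
    err = (h @ W2 + b2) - y           # prediction error
    # Backprop by hand for squared-error loss.
    gW2 = h.T @ err / len(X)
    gb2 = err.mean()
    gh = np.outer(err, W2) * (1 - h**2)
    gW1 = X.T @ gh / len(X)
    gb1 = gh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

h = np.tanh(X @ W1 + b1)
print(f"final MSE: {np.mean((h @ W2 + b2 - y) ** 2):.5f}")
```

The real result needed a much deeper network precisely because a cortical neuron's dendrites compute something far richer than this toy sigmoid.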

When the guy tried to parachute off the Eiffel tower, he did so because he'd tested the idea with models first and had some direct evidence of the thing he was attempting to harness

He didn't test it enough. His friends literally begged him to try his apparatus with a dummy before he used it himself, but he was too proud and confident in his invention.

https://en.wikipedia.org/wiki/Franz_Reichelt

From his arrival at the tower, however, Reichelt made it clear that he intended to jump himself. According to a later interview with one of the friends who accompanied him up the tower, this was a surprise to everybody, as Reichelt had concealed his intention until the last moment.[9] His friends tried to persuade him to use dummies in the experiment, assuring him that he would have other opportunities to make the jump himself. When this failed to make an impression on him, they pointed to the strength of the wind and said he should call off the test on safety grounds, or at least delay until the wind dropped. They were unable to shake his resolve;[7] seemingly undeterred by the failure of his previous tests, he told journalists from Le Petit Journal that he was totally convinced that his apparatus would work, and work well.

Trust me that I have the common sense not to do that. I meet the low bar of not being insane.

You all share the conviction that the human mind is arbitrarily tractable, controllable, malleable, that Selves can be engineered with no regard to their own will, and despite all evidence to the contrary.

There are many ways to control the human mind.

The only natural category that lot shares is that they were completely wrong, and it's a bit rich to make that assertion about me when I'm discussing what I have clearly labeled an extremely difficult engineering challenge. What else do you think my belief that it would take us several decades to get there (in the absence of AGI) means?

No, I think you're straightforwardly wrong about what is possible right now, and especially about what it shows. I don't think scientists can "emulate a rat brain", meaning create a digital simulacrum of a rat that allows them read/write access to the equivalent of a live rat brain. I certainly do not believe that scientists can emulate "1% of a human brain", under any reasonable interpretation of those words. My argument isn't that compute won't improve, it's that the mind is probably intractable, and that certainly no evidence of tractability is currently available. I have not looked into either the Human Brain Project or EBRAINS, but I'm pretty confident they don't do anything approaching actual emulation of a human mind.

I never claimed that they have a virtual rat running around either. In this case, they managed to fully analyze the connectome of a rat, mapping all the neurons and their interconnections, but it takes much more than that to get an emulation running.

You need the actual weights of the neurons (in the ML sense), and for that you either need optogenetics to study them, presumably in a live specimen, or you need to destructively scan the tissue with other techniques (if the specimens were alive when you started, they won't stay that way for long, which is why preserved tissue samples are used, including in the EU program). Besides, they only claim to have emulated 1% of the human brain, as a pilot program and technology incubator; your guess is as good as mine as to what running 1% of a brain does.

But looking back, I think it's pretty clear that this belief came from inference, not evidence, and from some pretty shaky inferences too. As you put it, "nothing in my knowledge of physics, chemistry, biology or engineering rules it out", and I wanted to believe it, and it fit neatly with other things I wanted to believe, so I discarded the contrary evidence and ran with it. That's what people do.

Do you think I peg my expected figure for how long it might take (counterfactual blah blah) where I do because I am "discarding evidence"? No. It's because I have repeatedly acknowledged that it is an enormously difficult task.

Hell, if I was in charge of funding, I wouldn't put too much money into human brain emulation either, because AGI makes the wait-curves too steep. Anything we mere dumb humans do to painstakingly achieve that is likely going to be a waste of time and resources as opposed to a far more intelligent entity working on it, and we're building those even if they're decidedly inhuman.

Please, please, recognize the difference between "I know this is so" and "I don't know why this shouldn't be so". Both are useful; I argue that both are entirely necessary. But it pays to be clear about which one you're using in a given situation, and to not mix the two up.

Once again, if it's not contradicted by physics, then it's an engineering problem. We can emulate neurons to within the limits of measurement and innate noise, as the Quanta link shows.

It is a hard problem. It is still a problem that I expect will be solved eventually, and am reasonably confident I'll be alive to see it.