The United States was taking "soft" action against Japan before Japan attacked Pearl Harbor, both in the form of an oil embargo and in the form of sending mercenaries and weapons (the "Flying Tigers") to China to fight against it. I think that by 1941 the Japanese
- Had been training to think of the US as their main strategic opponent for some time
- Saw clear signs of US hostility
- Knew that the United States had a major naval expansion underway (due to the Naval Act of 1938)
- Understood that whoever punched first had a clear first-mover's advantage
I am not an expert on Japanese thought, so perhaps there was much more to it than this. But that seems sufficient to me, if that makes sense. [Edit to add: the Flying Tigers arrived in China before Pearl Harbor but did not see combat until after. It's unclear to me how secret this was or whether it played any part in Japan's thinking, but the oil embargo was, of course, no secret.]
With any luck(??) we'll get Archipelago In Space, which could be very interesting on a lot of different levels. IMHO the US Constitution is very good ("working exactly as intended") but it was a somewhat unwieldy compromise because it had to accommodate certain geopolitical realities. That may be less true for SPACE COLONIES than for any civilization before (although I am not sure I would place money on it).
Setting aside the ethical problems of "simply eliminating blacks" I have a few thoughts on this:
Firstly, I don't live in a town with a lot of people of that persuasion and I still had to pick trash off of my lawn this morning, presumably because someone somewhere (probably not a black person!) littered and it blew its way over to me. On the flip side, if you go to certain places, particularly in the Deep American South, you'll find a lot of black people who, as far as I can tell, care a lot about civility and dignity. (You can also find plenty of things to complain about if you're so inclined).
Secondly, about 15 years ago, I went to Rome (the one in Italy) and was not super impressed by the sorts of QOL OP was discussing - there was a lot of smoking and, IIRC, a fair amount of graffiti, although it's been a while. (It also just seemed civilizationally dead but that's another story). Now, something else I noticed in Rome is that Italians are not black (unless you're going with the original Ben Franklin idea of black wherein anyone who isn't a German, Briton or Swede is at least a bit swarthy).
In short: no, I don't think black people are the malignant heart of all problems with civic society.
Hmm, a lot here.
I think the United States was pretty close to being destined to ride out World War Two unscathed as long as nobody hostile developed a nuclear weapon. I think that's pretty much the only way CONUS gets more than a scratch. It's just very hard to do damage from across the Pacific or Atlantic oceans, and we only managed it on the one hand by taking a bunch of islands within striking range of Japan (and there aren't many of these on the Eastern American seaboard) and on the other hand by having England conveniently right there.
(This isn't the same argument as "the United States was destined to win the war in the Pacific.")
Similarly, I'm not sure anything about Japan's technology would have saved it from being stuck between the United States (with 2x its population) and the Soviet Union (with nearly 3x its population).
But I do agree that a slightly different culture would have kept it from getting bombed out of World War Two and made it more competitive in the postwar era - a Japan that doesn't lose World War Two is at a minimum a major regional power.
I also think, FWIW, you probably don't get US culture without US geography. I think crossing the Atlantic and Pacific had a strong filtering effect on Americans that persists to this day.
(Incidentally, there is imho a hugely underrated and interesting question about long-term space colonization: space colonies are likely to be insanely productive due to founder effects, but may also be prone to regimented thinking.)
Hmm, let me give it another go: I think parts of the United States do have those things. From what I can tell, those places typically have older and wealthier populations that have lived there for a long time and that have a sort of set culture. Larger American cities aren't so much that way (in fact they are often celebrated as not being that way!). Even where I am (American Southwest) the restaurants don't have security guards [that I've noticed] and you can get great service (although it's a little hit or miss) even in the poorer/more rural places.
Japan is an aging culture; I don't think it's surprising that it's more considerate, all else being equal (which it isn't: Japan was already a culture known for having a code of polite behavior, and America has always been known as a straightforward place, except for the South). I think (although I could be wrong) that cultural churn is destructive to polite mores. You can see how America, and particularly American cities, are full of cultural churn:
- Younger population
- Multicultural, with plenty of immigration
- Wealthy, with plenty of internal migration and climbing-the-ladder
I think a large part of politeness is having to live with the consequences of your actions. Even in a large city, a lack of "churn" and upward mobility means that neighbors know each other and live next to each other for years or decades. But America is wealthy, and people are always moving in, moving out, and mostly moving up and away, and so there's not as much incentive to be civil or polite or not to litter. (Although maybe stuff like that is literally just a question of whether or not you catch and punish the X% of the population that litters, I dunno.)
There's the saying that everyone in America is a temporarily embarrassed millionaire, and I think that attitude makes more millionaires, and fewer polite waiters and careful ramen chefs. In other words, it's hard for a constantly moving culture to settle around a distinct set of mores. (But I've never been to Japan, so I'm on thin ice making comparisons.)
People in the United States have more wealth in other measurable things, though (cars, firearms, computers, square footage of living space, etc.)
I think that the United States is a very big and very varied society, and ultimately on the whole it's not as high trust as e.g. Japan, but it is higher variance. And higher variance arguably means more wealth, since innovations that improve QOL and increase wealth are unusual.
The other thing, though, is that the United States basically got unrestricted access to an entire continent and rode out essentially unscathed a very formative moment in industrial history that saw much of the rest of the world absolutely obliterated (including Japan) and so it got a significant head start in a lot of ways that matter.
That was pretty standard stuff, especially in Judea. There were of course doubters and pragmatic sorts but the cultural milieu was far more accepting of this kind of thing.
And to the extent that this was true but no longer is, past Christians may have taken it as a sign that Christianity was correct: as I've mentioned on here before, early Christian apologists made use of the decline in paranormal phenomena as evidence that Christ's coming had upended the old order of things.
But I'm not so sure things are that different from 1st century Judea. In Christ's time, generals consulted the entrails of animals; roughly two thousand years later, the generals consulted psychics. Divination and astrology remain popular, rogue billionaires fund research into the question of life after death, insiders from shadowy oracular government agencies tell Congress that UFOs could be coming from other dimensions, the New York Times runs articles about demon exorcism. I could see future generations looking back on 21st century America as a heyday of superstition and belief in the paranormal.
OP's statement "claims that may have seemed more plausible centuries ago but are no longer so persuasive" may be true in the literal sense that, as you say, the "cultural milieu" may be more skeptical of them now. But I'm not sure the cultural milieu is the best way of evaluating the truth of a claim. And even if it were, it seems fairly constant to me that the majority of people believe in the paranormal or spiritual, while a minority of people (often well-educated) express skepticism of it, with varying levels of outspokenness.
Regardless of when the line was written, I think it's very reasonable to say that the Son of Man did not come in his kingdom. Surely we would've noticed?
I've got a note on the context above you may have missed. That being said – two of my favorite passages of Scripture (and quite topical to Christmas, for they roughly bookend the Gospel accounts of Christ's life) touch on this question:
Now when Jesus was born in Bethlehem of Judaea in the days of Herod the king, behold, there came wise men from the east to Jerusalem, Saying, Where is he that is born King of the Jews? for we have seen his star in the east, and are come to worship him. [...] And when they were come into the house, they saw the young child with Mary his mother, and fell down, and worshipped him: and when they had opened their treasures, they presented unto him gifts; gold, and frankincense, and myrrh.
The above is from Matthew 2. Below, from John 18 - 19:
Then Pilate entered into the judgment hall again, and called Jesus, and said unto him, Art thou the King of the Jews? Jesus answered him, Sayest thou this thing of thyself, or did others tell it thee of me? Pilate answered, Am I a Jew? Thine own nation and the chief priests have delivered thee unto me: what hast thou done? Jesus answered, My kingdom is not of this world: if my kingdom were of this world, then would my servants fight, that I should not be delivered to the Jews: but now is my kingdom not from hence. Pilate therefore said unto him, Art thou a king then? Jesus answered, Thou sayest that I am a king. To this end was I born, and for this cause came I into the world, that I should bear witness unto the truth. Every one that is of the truth heareth my voice. Pilate saith unto him, What is truth?
[...]
And Pilate wrote a title, and put it on the cross. And the writing was Jesus Of Nazareth The King Of The Jews. This title then read many of the Jews: for the place where Jesus was crucified was nigh to the city: and it was written in Hebrew, and Greek, and Latin. Then said the chief priests of the Jews to Pilate, Write not, The King of the Jews; but that he said, I am King of the Jews. Pilate answered, What I have written I have written.
Presumably they felt pretty vindicated.
From what I understand with Genesis specifically, it is speculative at best to suggest that it was understood as a scientific account of creation at the time, so it seems unlikely it was intended to be received in the way a 21st Century American would receive it. There's some pretty interesting textual evidence that at least part of the point of the creation story was about exploding certain other "hostile" creation myths. (Given the time of year, it's worth remembering Genesis 3:15, which is some extremely advanced foreshadowing even if you take the story as a parable!)
BUT
I have played Civilization V, which has an option to simulate the age of the Earth (I think at 3, 4, and 5 billion years, don't quote me on that). And anyone inside my Civilization game would think the Earth was billions of years old and had undergone millions of years of evolution and such, which of course is patently untrue – I created the game (ex nihilo if you will) one second ago on my computer, with barbarians springing forth as if out of Jove's forehead and such.
My point here isn't to argue about Genesis (although I enjoy discussing it and would be happy to, in a friendly way!), just to point out that if you believe an omnipotent God is a viable hypothesis the range of possibilities about the nature of reality is broader even than the range of possibilities under e.g. the simulation hypothesis.
I don't know why any of those were more plausible centuries ago – people in 1723 AD, 23 AD, and 2023 BC were all very much aware that e.g. people did not come back from the dead, and Christ's proclamation that He was one with God was considered so outrageous at the time that it nearly led to His being stoned, according to the New Testament text. We haven't made any revolutionary discoveries in science over the past few thousand years that have made those sorts of things seem less plausible. (If anything, rather the opposite – for instance, although a virgin birth in the 1st Century remains miraculous, one happening in 2023 is merely an oddity. This line of thinking inevitably concludes in things like Ridley Scott trying to incorporate Jesus into the Alien canon.)
Or the 'Truly I tell you, some who are standing here will not taste death before they see the Son of Man coming in his kingdom' part.
If you think this is an admission against evidence, it follows that you think that Matthew was written in the 1st century, while eyewitnesses were still alive (somebody writing Matthew in, say, the second century would be less likely to include this if it was obviously untrue), which of course makes it more likely that Matthew is an accurate account, not less.
HOWEVER (although this puts me in mind of John's visions in Revelation, where he does see a vision of the Son of Man coming to rule the Earth) what's going on here is likely significantly less interesting than either an admission against evidence or a reference to the last and perhaps most controversial book in the New Testament canon: Matthew is setting up what happens immediately in the very next verse, when Jesus is transfigured before some of the people present in the previous chapter. (Chapter divisions were not present in the original text, so this is arguably a case where they confuse more than clarify).
The strain of Christian thought holding that at least parts of the Old Testament are allegorical is almost as old as Christian thought itself, so while it might be difficult for some strains of Christian thought, I doubt it has as much pull either way as New Testament findings do. Of course, even if you're committed to interpreting the majority of the Old Testament as a purely historical record, the older something is, the fewer traces it will leave, so it's easy to dismiss archeological conclusions you don't like (especially if they are based on a lack of findings).
From what I can tell, though, at least some things (such as the historicity of David) have also followed the trend discussed above, of being more supported as more archeological evidence becomes available, although of course there's a debate over how to interpret the evidence.
claims that may have seemed more plausible centuries ago but are no longer so persuasive.
I'm not sure this is really true – centuries ago we didn't have the archeological evidence we do today, so it was a lot easier to dismiss the New Testament record as something closer to a complete fabrication. The Pilate stone, for instance, wasn't discovered until 1961, and Papyrus 45 wasn't publicly known until 1933.
And what's known about the early New Testament isn't a closed door, either; there are new discoveries being made and research being done. For instance, there's been some work done in the early 2000s (which I sadly don't know as much about as I would like) that apparently shows the name-frequency use in the Gospels matches the name-frequency use in surviving archeological records from 1st century Palestine (which is very unlikely if the Gospels were not, at a minimum, based on solid oral traditions that originated in 1st century Palestine).
In short, there's a reason that the theory that Jesus was entirely myth and not a historical figure took off in the Enlightenment but is outside of mainstream historical thought today: we have better reasons to believe in the historicity of the Gospel accounts in 2023 than we did in 1723.
To be fair, the sorts of people who make this evolutionary argument will typically point out that the Old Testament is not written down in the order in which it was composed (for instance, Genesis 2 is usually thought to be significantly older than Genesis 1), so we have to do a bit more work to determine which texts came first chronologically, and then discern the evolution that way.
Yes, I agree – that's why I focused on the Song of the Sea and the Song of Moses, since they're supposed to have been composed early, as I understand it. From what I understand of mainstream Scriptural textual criticism, I'm a bit skeptical of some of the approaches textual critics employ (for the reasons you lay out), but I think it's interesting to make arguments even with significant concessions. Any other candidates for early Old Testament texts that come to mind for you?
Hence every kind of magic was destroyed, and every bond of wickedness disappeared; ignorance was removed, and the old kingdom abolished, God Himself being manifested in human form for the renewal of eternal life.
Beautiful. Merry Christmas!
Yes, interesting – I think I agree with your distinctions, although I wonder if the Romans thought of early Christians as "Stepford nice," too.
Worth noting that some of the New Testament gets fairly granular as to how things like charity were administered and what measures should be taken to avoid dependency, although I am not sure that necessarily sheds much light on what was happening a century or two later.
This is very interesting. Really sounds like the closest parallel today (barring maybe the attractiveness to women and the down-on-their-luck) would be Mormons, and for most of the same reasons. Or do you think that's a bad comparison?
As you read between the lines of the Old Testament, it's possible to trace a gradual evolution from what was basically a standard polytheistic religion following the ancient Semitic pattern
This definitely isn't true narratively (in the sense that e.g. Genesis clearly sets out God as the Creator God) but I don't think this is true textually, either, at least in the sense that the older parts of the Old Testament are more polytheistic and the newer parts of the Old Testament are more monotheistic. Wikipedia, which I assume is probably a good summation of scholarly consensus, lists the Song of the Sea as possibly the oldest part of the Old Testament. And the Song of the Sea has a fairly standard monotheistic (or, if you prefer, henotheistic) line:
Who among the gods is like you, Lord? Who is like you— majestic in holiness, awesome in glory, working wonders?
The Song of Moses (again, one of the four oldest passages as per Wikipedia), has even stronger language, identifying other "new" gods worshipped by the children of Israel as demons or devils, and differentiating God from the gods:
See now that I, even I, am he, and there is no god with me: I kill, and I make alive; I wound, and I heal: neither is there any that can deliver out of my hand.
So it seems fairly clear that the earliest written parts of the Old Testament were already making a distinction between God and gods qualitatively, suggesting that the other gods were in some sense false. (Now, obviously, if you take the Scriptural narrative as a historical one, it definitely records that the children of Israel were in fact often polytheistic in practice.)
And as OliveTapenade points out, this sort of rhetoric (where the other gods are false gods or demons) doesn't gradually disappear, but reappears even in the New Testament. Interestingly (and to Goodguy's question below) my understanding is that some early Christian apologists centered some of their pitch around the idea that the old oracles had begun to die after the advent of Christ, which suggests that they thought a persuasive argument to pagans or post-pagans was "the old gods are out, the One True God has defeated them." (I guess pagans were primed for this, the death of Pan supposedly occurring under Tiberius' reign, chronologically close to the crucifixion of Christ.) But in order to make those sorts of arguments, early apologists had to concede the existence of other gods of some kind. So the most maximalist monotheistic idea ("there are no other gods and pagan religious practices are all bunk") isn't really something that you see in either the New Testament or the early Church.
This seems less like a philosophically significant matter of classification and more like a mere difference in function.
Well sure. But I think we're less likely to reach good conclusions in philosophically significant matters of classification if we are confused about differences in function.
We could optimize an AI to maneuver a little robot around an unknown environment indefinitely without it being destroyed, and part of that optimization would probably involve timely reaction to the perception of damage. Then you could jab it with a hot poker and watch it spin around, or what have you.
And while such a device might not have qualia, it makes more sense (to me, anyway) to say that such an entity would have the ability to e.g. touch or see than an LLM.
But again, so what? Optimizing an AI toward steering a robot around the environment doesn't make it any smarter or fundamentally more real, at least not in my view.
In my view, the computer guidance section of the AIM-54 Phoenix long-range air-to-air missile (first tested in 1966) is fundamentally "more real" than the smartest GAI ever invented but locked in an airgapped box, never interfacing with the outside world. The Phoenix made decisions that could kill you. AI's intelligence is relevant because it has impact on the real world, not because it happens to be intelligent.
But anyway, it's relevant right now because people are suggesting LLMs are conscious, or have solved the problem of consciousness. It's not conscious, or if it is, its consciousness is a strange one with little bearing on our own, and it does not solve the question of qualia (or perception).
If you're asking if it's relevant or not if an AI is conscious when it's guiding a missile system to kill me - yeah I'd say it's mostly an intellectual curiosity at that point.
Video game NPCs can't have conversations with you or go on weird schizo tangents if you leave them alone talking with each other. They're far more reactive than dynamic.
If you leave them alone shooting at each other they can engage in dynamic combat, what more do you want :P
This is a pretty weird, complex output for a nonthinking machine:
I don't believe I ever said that LLMs were not "thinking." Certainly LLMs can think inasmuch as they are performing mathematical operations to produce output. (But then again we don't necessarily think of our cell phone calculator as "thinking" when it performs mathematical operations to produce output, although I certainly may catch myself saying a computer is "thinking" any time it is performing an operation that takes time!)
Sensation is a process in the mind. Nerves don't have sensation, sensors don't have sensation, it's the mind that feels something. You can still feel things from a chopped off limb but without the brain, there is no feeling.
Take a rattlesnake, remove its brain, and then grab its body and inflict pain upon it. It will strike you (or attempt to do so). It may not be "feeling" anything in the subjective experiential sense, but it is "feeling" in the sense of sensing. Similarly, if you put your hand on a hot stove, your body will likely act to move your hand away before the pain signal reaches your brain. I suppose one can draw many conclusions from this. I draw a couple:
- Sensation, to the extent that it is a process, is probably not a process entirely in the brain - sure, the mind is taking in signals from elsewhere, but it's not the only part of the body processing or interpreting those signals. (Or maybe a better way of saying it is that the mind is not entirely in the brain).
- Things without intelligence or consciousness can still behave intelligently.
I dispute that the Britannica is even giving me more complex or more intelligent output.
Britannica is probably more complex and intelligent than an equivalently sized sample of all LLM output.
The 'novel tasks' part greatly increases complexity of the output, it allows for interactivity and a vast amount of potential output beyond a single pdf.
Sure, I agree with this. But e.g. Midjourney is also capable of generating vast amounts of potential output - do you believe Midjourney is intelligent? Does it experience qualia? Is it self-aware or conscious? Or are text-based AIs considered stronger candidates for intelligence and self-awareness because they seem self-aware, without any consideration of whether or not their output is more complex? Which contains more information, a 720 x 720 picture or a 500-word essay generated by an LLM?
As I understand it, LLMs use larger training datasets than image generation models, despite most likely outputting less information (fewer bits) per prompt than an image model. This suggests to me that complexity of output is not necessarily a good measure of (for lack of a better word) intelligence, or capability.
What about the pain people feel when they discover someone they respect has political views they find repugnant? Or the pain of the wrong guy winning the election? The pain of a sub-par media release they'd been excited about? There are plenty of kinds of purely intellectual pain, just as there are purely intellectual thrills.
These things are, as I understand it, mediated by hormones, which modulate not only emotions like disgust and anxiety but also influence people's political views to begin with. These reactions aren't "purely intellectual" if by "purely intellectual" you mean "fleshly considerations don't come into it at all."
Many people who deeply and intensively investigate modern AIs find them to be deeply emotional beings.
I bet if we knew how the human vision process worked we could do things like that to people too.
We can do optical illusions on people, yes. And both the human consciousness and an LLM are receiving signals that are mediated (for instance the human brain will fill in your blind spot). But the process is different.
So they do pass the most basic test of vision and many of the advanced ones.
Adobe Acrobat does this too, with optical character recognition, but I don't think that Adobe Acrobat "sees" anything. Frankly, my intuition is much more that the Optophone (which actually has optical sensors) "sees" something than that an LLM or Adobe (which do not have optical sensors) "sees" anything. But as I said, I don't object to a functionalist use of "seeing" to describe what an LLM does - rather, it seems to me that having an actual optical sensor makes a difference, which is where I want to draw a distinction. Think of it as the difference between someone who reads a work of fiction and a blind person who reads a work of fiction in Braille. They both could answer all of the same questions about the text; it would not follow that the blind person could see.
how exactly is that different from a brain? I mean the brain itself feels nothing, the sensations are interpreted from data from the nerves, the brain doesn’t experience pain
I experience pain. The qualia are what I experience. To what degree the brain does or doesn't experience pain is probably open to discussion (preferably by someone smarter than me). Obviously if you cut my head off and extract my brain it will no longer experience pain. But on the other hand, if you measured its behavior during that process - assuming your executioner was at least somewhat incompetent, anyway - you would see the brain change in response to the stimuli. And again, a rattlesnake (or rather the headless body of one) seems to experience pain without being conscious. I presume there's nothing experiencing anything, in the sense that the rattlesnake's head is detached from the body that is experiencing pain, but I also presume that an analysis of the body would show firing neurons, just as would be the case with my brain if you fumbled lopping my head off.
(Really, I think the entire idea we have where the brain is sort of separate from the human body is wrong, the brain is part of a contiguous whole, but that's an aside.)
how is what’s happening between your body and your brain different from an LLM taking in data from any sort of input
Well, it's fundamentally different because the brain is not a computer: neurons are more complex than bits, the brain is interfacing not only with electrical signals via neurons but also with hormones, so the types of data it is receiving are fundamentally different in nature, and probably lots of other stuff I don't know. Look at it this way: supposing we were intelligent LLMs, and an alien spacecraft manned by organic humans crashed on our planet. We wouldn't be able to look at the brain and go "ah, OK, this is an organic binary computer, the neurons are bits, here's the memory core." We'd need to invent neuroscience (which is still pretty unclear on how the brain works) from the ground up to understand how the brain worked.
Or, for another analogy, compare the SCR-720 with the AN/APG-85. Both of them are radars that work by providing the pilot with data based on a pulse of radar. But the SCR-720 doesn't use software and is a mechanical array, while the APG-85 is an electronically scanned array that uses software to interpret the return and provide the data to the pilot. If you were familiar with the APG-85 and someone asked you to reverse-engineer a radar, you'd want to crack open the computer to access the software. But if you started there on an SCR-720 you'd be barking up the wrong tree.
Or a human experience of embarrassment for a wrong answer and an LLM given negative feedback and avoiding that negative feedback in the future.
I mean - I deny that an LLM can flush. So while an LLM and a human may both convey messages indicating distress and embarrassment, the LLM simply cannot physically have the human experience of embarrassment. Nor does it have any sort of stress hormone. Now, we know that, for humans, emotional regulation is tied up with hormonal regulation. It seems unlikely that anything without e.g. adrenaline (or bones or muscles or mortality) can experience fear like ours, for instance. We know that if you destroy a human's amygdala, it's possible to largely obliterate their ability to feel fear, and that if you block the amygdala's ability to bind with stress hormones, it will reduce stress. An LLM has no amygdala and no stress hormones.
Grant for the sake of argument a subjective experience to a computer - its experience is probably one that is fundamentally alien to us.
I think it’s fundamentally important to get this right because consciousness comes with humans beginning to care about the welfare of things that experience consciousness in ways that we don’t for mere objects. At higher levels we grant them rights. I don’t know what the consequences of treating a conscious being as an object would be, but at least historical examples seem pretty negative.
"Treating like an object" is I guess open to interpretation, but I think that animals generally are conscious and humans, as I understand it, wouldn't really exist today in anything like our current form if we didn't eat copious amounts of animals. So I would suggest the historical examples are on net not only positive but necessary, if by "treating like an object" you mean "utilizing."
However, just as the analogy of the computer is dangerous when reasoning about the brain, I think it's probably also dangerous to analogize LLMs to critters. Humans and all animals were created by the hand of a perfect God and/or the long and rigorous tutelage of natural selection. LLMs are being created by man, and it seems quite likely that they'll care about [functionally] anything we want them to, or nothing, if we prefer it that way. So they'll be selected for different and possibly far sillier things, and their relationship to us will be very different from that of any creature we coexist with. Domesticated creatures (cows, dogs, sheep, etc.) might be the closest analogy.
Of course, you see people trying to breed back aurochs, too.
Or at least they behave as if they're distressed.
Yes - video game NPCs and frog legs in hot skillets also do this, I don't think they are experiencing pain.
Heartbreak can cause pain in humans on a purely cognitive level, there's no need for a physical body
I am inclined not to believe this to be true. Heartbreak involves a set of experiences that are only attainable with a physical body. It is also typically at least partially physical in nature as an experience (up to and including literal heartbreak, which is a real physical condition). I'm not convinced a brain-in-a-jar would experience heartbreak, particularly if somehow divorced from sex hormones.
Past a certain level of complexity in their output, we reach this philosophical zombie problem.
Consider what this implies about the universe, if you believe that it "output" humans. (Of course you may not be a pure materialist - I certainly am not.)
The output is recycled input. Look, let's say I go to an AI and ask it to tell me about the Seven Years' War. And I go to Encyclopedia Britannica Online and type in "Seven Years' War." What ends up happening is that Encyclopedia Britannica gives me better, more complex, more intelligent output for less input. But Encyclopedia Britannica isn't self-aware. It's not even as "intelligent" as an LLM. (You can repeat this experiment with a calculator.) The reason that LLMs seem self-aware isn't due to the complexity of the output returned per input, it's because they can hold a dynamic conversation and perform novel tasks.
Also, they barely even work at that, more modern image models are apparently immune:
Yes - because modern image models were given special intervention to overcome them, as I understand it. But while we're here, it's interesting to see what your link says about how modern image models work, and whether or not they "see" anything:
computer vision doesn't work the same way as in the brain. The way we do this in computer vision is that we hook a bunch of matrix multiplications together to transform the input into some kind of output (very simplified).
we have no way to know whether some artificial intelligence that humans create is conscious or not
Well, this is true for a sufficiently imprecise definition of conscious.
With artificial intelligence, such heuristics would be largely useless and we would have basically no way to know whether they are conscious or not.
This is closer to what I am inclined towards. Basically, I don't think any pure software program will ever be conscious in a way that is closely analogous to humans, because it isn't a lifeform. I certainly accept that a pure software program might be sufficiently adept at mimicking human consciousness. But I deny that it experiences qualia (and so far everyone seems to agree with me!)
I do not think that substantiating a software program into a machine will change its perception of qualia. But I do think it makes much more sense to speak of a machine with haptic and optical sensors as "feeling" and "seeing" things (as a collective unit) than it does an insubstantial software program, even if there's the same amount of subjective experience.
Not perfectly but close enough to the human level that there's a clear qualitative distinction between 'seeing' like they do and 'processing'.
I mean – I think this distinction is important for clear thinking. There's no sensation in the processing. If you watch a nuclear bomb go off, you will experience pain. An LLM will not.
Now, to your point, I don't really object to functionalist definitions all that much – supposing that we take an LLM, and we put it into a robot, and turn it loose on the world. It functionally makes sense for us to speak of the robot as "seeing." But we shouldn't confuse ourselves into thinking that it is experiencing qualia or that the LLM "brain" is perceiving sensation.
If you want to define seeing to preclude AIs doing it, at least give some kind of reasoning why machinery that can do the vast majority of things humans can do when given an image isn't 'seeing' and belongs in the same category as non-seeing things like security cameras or non-thinking things like calculators.
Sure – see above for the functionalist definition of seeing (which I do think makes some sense to use casually for what an AI can do) versus the qualia/sensation definition of seeing (which we have no reason to believe AIs experience). But also consider this – programs like Glaze and Nightshade can work on AIs, and not on humans. This is because AIs are interpreting and referencing training data, not actually seeing anything, even in a functional sense. If you poison an AI's training data, you can convince it that airplanes are children. But humans actually start seeing without training data, although they are unable to articulate what they see without socialization. For the AI, the articulation is all that there is (so far). They have neither rods nor cones.
Hence, you can take two LLMs, give them different training datasets, and they will interpret two images very differently. If you take two humans and have them look at those same images, they may also interpret them differently, but they will see roughly the same thing, assuming their eyeballs are in good working condition etc. Now, I'm not missing the interesting parallels with humans there (humans, for instance, can be deceived in different circumstances – in fact, circumstances that might not bother an LLM). But AIs can fail the most basic precept of seeing – shown two essentially identical pictures [AI anti-tampering programs do change pixels slightly], they can't even tell management "it's a similar picture" without special intervention.
It seems that lunar gravity is low enough that what you describe is possible with current materials?
Aha, I must have misremembered the Swedes. But yes, his classifications are comical by the standards of today's discourse, although I think it makes some sense (basically, white people are Anglo-Saxons and close kin). I really wish we had his final verdict on the Irish (who are, as I understand it, literally the palest people in the world).