we shouldn't normalize asking people to lie... That's just a demand for large-scale hypocrisy.
How many people not on the politically engaged or online right could have told you a single thing Charlie Kirk had said before his murder made him famous? How many non-very-online liberals/leftists had even heard of him? Very few, I think, and yet that didn't stop the ones I know from calling him a fascist, and being more vocally concerned about Jimmy Kimmel's brief cancellation than about what one gathers was a sympathetic if (maybe, quietly) regrettable episode of political violence. Why did they "believe" he was a fascist? Either because they had uncritically imbibed one of the many blatant misrepresentations or outright lies about the things he had said circulating in their media environment, or simply because they had been told to by trusted sources. Most people (right and left) have an appallingly low tolerance for cognitive dissonance. If their side appears to have murdered an innocent man, either it wasn't really their side (remember the attempt to pin it on the groypers?), or he wasn't really innocent. Or, just as often, an incoherent superimposition of both.
To accuse someone of hypocrisy presupposes an internal distinction between higher-order moral principles and actions or judgments of narrower scope. But it's risible to speak of higher-order principles when people are so easily manipulated; and most commentators hardly make any pretense of having considered all the facts relevant to any "controversial" case (the controversy generally being between sides, not within them, let alone within individual minds). The most unambiguously universalist slogans are subject to casuistry, and even the word "casuistry" gives too much credit, because case-by-case reasoning is still a form of reasoning. Imputing an autonomous rational intellect to people, such as would be necessary for them to be truly hypocritical, more often than not impedes one's understanding of and ability to predict their verbal outputs. What the response to Kirk's assassination brought home to me wasn't that liberals have a surprisingly robust anti-fascist value system, such that even Kirk, who (to those on the right who were aware of him) epitomized the moderate religious faction, was beyond the pale; it was that their beliefs -- not only, but to be honest, yes, especially their beliefs -- are not even really beliefs, not even the ones they would most readily ascribe to themselves in a vacuum; that the danger is best understood on an impersonal level, because it's misleading to model most of the people in question as people. I don't respect their "hatred" of Charlie Kirk enough to expect them to be tactful about it. Their hatred is just a boulder in an avalanche started by someone out of sight. Will I "tolerate" my difference of opinion with the boulder as it comes hurtling towards my face? No, but I won't think of it like that in the first place, and I know better than to expect the boulder to suspend the law of gravity for my sake.

But they aren't the same answer. In the one possibility, the Chinese Room is conscious. In the other, it's not. What does it mean to be conscious? It means having subjective experience, which is by definition not something that can be directly probed. But that doesn't make the distinction meaningless. I know what it's like for the lights to be on. It's easy for me to imagine a copy of myself that behaves almost exactly like I do without the lights being on, or to imagine that for others, the lights are not on. It so happens that I will never be able to test whether that's the case, but, if there is a real world, then there is a fact of the matter as to whether others have subjective experience. In principle, it's no different from any other phenomenon where the fact of the matter is out of reach, like whether the cook spat in your food. The only difference is that the class of first-person phenomena, which are undeniably real (do you not see the color red?) and undeniably beyond the ability of at least current science to explain in material terms (can you show me the equation that has a bunch of terms representing physical magnitudes on the left side and the color red on the right side -- not simply an array of numbers representing wavelength or a pattern of neuronal activity, but the actual color red as perceived by humans?), are such that they can't even hypothetically be overcome in the same way that you could, hypothetically, find out whether the cook spat in your food if there happened to be a security camera in the kitchen (or something).
And you would still have to postulate that ingredient, because you would still not have that equation with the color red on the right side. Which is inconvenient to the point of view that materialist science should be able to enumerate all the constituents of reality, but wishing doesn't make the problem go away. There is, in fact, something left over. And even that's underselling it. The thing that's left over is the absolute most basic element of experience -- not just an element of experience, experience itself. A few minutes' reflection should be enough to dispel the naive belief that the tools of science, which have been very successful in describing patterns in what we see, should also be able to explain that we see. No, science has not brought us one step closer to that. No, no neurobiological revolution will unlock the red crayon that would permit scientists to fill out the right side of the equation. Sorry! Being a hard-headed realist means accepting this. Instead, you posit the philosophy-addled metaphysics according to which there is a world-out-there, but one that's beholden to human epistemology at its deepest ontological level. "If we can't tell the difference between two states of affairs they must be the same, otherwise it wouldn't be fair!" Tough luck, kid. Life isn't fair.
This is just word games. The "100% understanding" means a complete mechanistic understanding of the process by which someone answers "red" when asked the color of a presented apple, from the reflection of 700nm light off the apple, through the retina, optic nerve, visual cortex, etc., and eventually the relevant motor nerves. It's not a problem for the thought experiment that the actual color red, as you and I understand it, won't show up anywhere in this decomposition, and that Mary's understanding of redness is therefore incomplete despite all appearances. That's just what it's trying to show. The name of the paper where the thought experiment was proposed is literally "Epiphenomenal Qualia". For qualia to be epiphenomenal means that they are, somehow, the output of physical processes, but not the right kind of thing to serve as input to other physical processes. (Except certain physical processes implicated in discussions of consciousness themselves, which is a hole in the theory. They don't call it the hard problem for nothing.) It's like Carl Sagan's invisible dragon, if every thought and experience of the world you had ever had were somehow inextricably and unmistakably predicated on the fact that you were yourself an invisible dragon.
True. Again, this is why they call it the hard problem. There are no cheap answers. The one answer that's decidedly worse than the rest is the one that simply denies the fact of first-person experience or pretends that it's somehow trivially obvious that such-and-such an algorithm or whatever should be accompanied by awareness/perceptual experience. That just seems dishonest. It's a more flagrant departure from common sense than you'll find literally anywhere else in philosophy, ever, anywhere, and that's saying a lot. It isn't parsimonious to throw out consciousness in order to have a more unitary understanding of the things that appear in consciousness. Experience obviously precedes theorizing about experience. Experience of the world obviously precedes any understanding of the world. And yet, one hesitates to say that experience precedes, and thus obviates, the world itself. It's almost as if any experience of reality has a dual character: the experience, and the reality. Solutions to the hard problem don't "smuggle in" dualism; they simply recognize this screamingly obvious enigma, and try (and fail) to resolve it. The most commonsense "concept of a plan" may be panpsychism: all phenomena are "material" in the sense that they arise from matter, but matter has both a physical dimension and an experiential dimension. With panpsychism, you can, in theory if not in practice, derive that equation with the color red. It would still be dualistic, though.
So in a vacuum, it's no longer the case that 2+2=4 or that "If A then B, A, therefore B" holds? Actually, forget the vacuum. What do you mean by "manipulations"? Do you mean humans manipulating symbols/independent variables? If so, were math and logic not features of reality before humans began to use them? I have a lot to say about the ontological nature of math and logic, but you need to make your position clearer before I can respond to it.
Agreed, both for the reasons you state and because free will is internally incoherent. Suppose A and B are both given the marshmallow test. A passes, B fails. What's the difference? Let's say that A was able to delay gratification because he has more willpower. OK, so A's choice was constrained by his having willpower, and B's was constrained by his lack of willpower. Those constraints are just like any other deterministic constraint. (It might be odd to call "having willpower" a constraint, but it does constrain A's choices: you can't say that A was free to have more or less willpower, but became responsible for his having greater willpower by an act of will, in turn made possible by his... greater willpower -- without infinite regress, so it must be posited as a causally efficacious background condition for which A is not (ultimately) responsible.) No matter how else you explain the difference, in order for it to count as an explanation, you must posit some such constraint. Any leftover difference will necessarily be inexplicable. So any choice carried out according to one's "free will" will be inexplicable, essentially random, and few people would be willing to dignify randomness with the term "free will". Unlike the hard problem of consciousness, there is no mystery here, because the sense that we have free will is something that we might be mistaken about, whereas the sense that... we have any sense at all, is not.
(In other words: if free, then unconstrained; if unconstrained, then random; and randomness ≠ free will as naively construed.)
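That chain can be made fully explicit as a toy propositional derivation. This is just a sketch of the argument's shape; `Free`, `Unconstrained`, and `Random` are placeholder names I'm introducing, not terms from the original:

```lean
-- Toy formalization of the syllogism: if freedom implies being
-- unconstrained, being unconstrained implies randomness, and randomness
-- is incompatible with free will as naively construed, then the naive
-- notion of free will refutes itself.
example (Free Unconstrained Random : Prop)
    (h1 : Free → Unconstrained)
    (h2 : Unconstrained → Random)
    (h3 : Random → ¬Free) :
    ¬Free :=
  fun hf => h3 (h2 (h1 hf)) hf
```

Of course, all the philosophical work is in whether the premises (especially `h3`) are accepted; the derivation itself is trivial once they are.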
What do you mean by "needed"? Needed to explain base reality? No moral framework is needed for that, consequentialism no more so than the other two. Needed to cultivate what would generally (across times and places) be regarded as a high standard of morality? No. Needed to characterize the moral systems of individuals and societies? No: some people believe, or act as though they believe, in non-consequentialist moralities. Or, take my case: I don't "believe" in any kind of morality. What I mostly have is feelings that particular states of affairs (or actions) would be good or bad -- not even good or bad, as those are post hoc characterizations of my feelings -- rather, some specific feeling in each scenario compels me to act in a certain way, or turns my thoughts in a certain direction. Such feelings are the basis of morality for everyone. Sometimes I reason about my feelings, but I never delude myself into thinking that in doing so I uncover "moral truths"; and even if I discover a "contradiction" in my feelings, I accept both feelings and the contradiction along with them, as no feelings can ever really be in contradiction the way two propositions can be; after all, I did have both of them. Sometimes people, myself included, feel moved to assent to higher-level principles that purport to govern the way they should feel about more atomic situations. Some of those principles, really just since the 19th century but with some proto-examples here and there, are about universal states of affairs, as in consequentialism, although as you might have guessed, I tend to find those the hardest to take seriously in that they're the remotest from the feelings that undergird all morality. Insofar as consequentialist arguments do sometimes resonate with me, that resonance coexists or alternates with other resonances across the spectrum of abstractness. So where does the privileged position of consequentialism come from? What is it uniquely needed for? 
Because to me, its main utility seems to be to empower nerds to pretend to reason rigorously/quantitatively about morality to the point that they forget that what they are reasoning about is a completely artificial construct that explains nothing.
(My favorite example of this is Sam Harris's quest many years back to solve ethics (such was his framing) by pinpointing the neural correlates of happiness. (Phase 2 was going to be to figure out how to configure society so as to maximally stimulate them, or something like that. Yes, it really was that retarded.) "Doesn't assuming that good = happiness sidestep... all of ethics?" Yes, but you see, what matters is that we scientists have answers, and it's your fault for asking the wrong questions.)
Some people really are virtue ethicists. Traditional Christian morality treats virtues as ends in themselves. Probably a resurrected medieval theologian wouldn't deny that virtuous conduct tends to manifest in benefits for the self/community, but they would still choose a world of maximally virtuous people who experience perpetual suffering over a world of unvirtuous people who experience perpetual bliss. Evil people experiencing bliss might even be a bad thing in its own right. Consequentialists can fold that in by stipulating that they believe in utilitarianism for the good and reverse-utilitarianism for the evil, but then they have to give an account of good and evil in non-consequentialist terms. There is of course a trivial sense in which all conceivable morality is consequentialist -- "According to my world model, which currently extends only as far as the room I am standing in, and my constantly fluctuating value function, which assigns a high negative value to the unpunched face of the guy who just insulted me, it would improve the global state of affairs for me to punch him in the face" -- but that's boring, akin to the trivial sense in which every action is selfish.
This comment is long enough, so I'll leave it there.