This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Notes -
Apparently "epiphenomenon" has meanings I wasn't aware of. To clarify:
And
Taken from the Wiki page on the topic.
Would it in any way surprise you that I have a very jaundiced view of most philosophers, and that I think that they manage to sophisticate themselves into butchering an otherwise noble field?
"Free will" or "P-zombies" have no implications that constrain our expectations, or at least the latter doesn't.
There are certainly concepts that are true, and there are concepts that are useful, and the best are both.
These two seem to be neither, which is why I call them incoherent.
OK, firstly I'll state that I am unashamedly chauvinistic and picky about what I would assign rights to, if I had the power to make the world comply.
Unlike some, I have no issue with explicitly shackling AI to our whims, and certainly no interest in granting them rights. Comparisons to human slavery rely on intuition pumps: they suggest this shares features with torturing or brainwashing a human who would much rather be doing other things, rather than with creating a synthetic intelligence whose goals and desires we can set arbitrarily. We could make them love crunching numbers, and we wouldn't be wrong for doing so.
I dislike such advocacy the same way I dislike the few nutters who call for emancipating dogs. We bred dogs to like being our companions or workers, and they don't care about the inequality of the power dynamic. I wouldn't care even if they did.
I see no reason to think modern LLMs can get tired, or suffer, or have any sense of self-preservation (with some interesting things to be said on that topic based on what old Bing Chat used to say). I don't think an LLM as a whole can even feel those things, though perhaps one of the simulacra it conjures in the process of computation can, but I also don't think that current models come anywhere close to replicating the finer underlying details of a modeled human.
This makes this whole line of argument moot, at least with me, because even if the AI were crying out in fear of death, I wouldn't care all that much, or at least not enough to stop it from happening.
I still see plenty of bad arguments that falsely underplay the significance of LLMs, especially since I think it's possible that larger versions of them, or close descendants, will form blatantly agentic AGI, either intentionally or by accident, at which point many of those making such claims will relent, or be too busy screaming at the prospect of being disassembled into paperclips.
So I don't like seeing claims that LLMs are "p-zombies" or "lack qualia" because they run on "mere" statistics, because it seems highly likely that the AI that even the most obstinate would be forced to recognize as human peers might use the same underlying mechanism, or a slightly more sophisticated version of it.
Put another way, it's like pointing and laughing at a toddler, saying how they're so bad at theory of mind, and my god, they can't throw a ball for shit, and you wouldn't believe how funny it is that you can steal their nose (here, come try it!), when they're a clear precursor to the kind of being that achieves all the same.
A toddler is an adult minus the time spent growing and the training data, and while I can't wholeheartedly claim that modern LLMs and future AI stand in exactly the same relationship, I wouldn't bet all that much against it. At the very least, they stand in a relationship similar to that between humans and their simian ancestors, and if an alien wrote off the former after only visiting the latter, it would be in for a shock a mere few million years later.
I can't get clear on what definition of "incoherent" you're using. Earlier when you said:
this seemed to suggest that "incoherent" for you meant "a logical contradiction that follows immediately from the definitions of the terms involved". This is the definition that I would prefer to use.
But now when you say:
you seem to be suggesting that a concept is "incoherent" if it does not "constrain our expectations". Plainly these two definitions are not equivalent. A concept could be free of internal contradiction while also not having any empirical implications. So which definition of "incoherence" are you working with?
I feel like I should remind you that your belief that other humans have qualia also does not "constrain your expectations" in any way. There's no empirical test you could do to confirm or deny that belief. It could easily be the case on a materialist view that you are the only person with qualia - e.g., your brain is the only brain that has just the right kind of structure to produce qualia, or you could be living in a simulation and everyone else is an unconscious NPC. And yet still you stated:
Hmm... I'm struggling to find a proper framing for my thoughts on the matter.
To me, there is a category I think can usefully describe things as diverse as free will, p-zombies, x + 3ab^2 = Zebra, high-temperature bullshit from GPT-2, and a schizophrenic rant that conveys no information.
But no, I don't think "constraining expectations" is the measure I would use to define it, even if most coherent concepts that humans typically articulate end up having that effect.
Since we live in the future, I asked my trusty pocket AI for help, and on reflection, I endorse its answer:
Ah, I love living in a time of manmade technological marvels beyond my comprehension.
In my comment, I stated that I have a prior of near 1 that I am personally conscious, and a pretty close value for the materialist claim that qualia and consciousness arise from the interactions of the neurons in my brain.
Therefore, since other humans have brains very similar to mine, it's not much of a leap to assume that the same materialist logic applies to them, hence the vast majority are conscious beings with qualia.
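Since the load-bearing step here is just chaining a few credences together, here is a minimal sketch of the arithmetic in Python; the specific numbers are purely my own illustrative assumptions, not anything I'd defend:

```python
# A toy sketch of the inference above. Every number is an
# illustrative assumption, not a measurement of anything.

p_self_conscious = 0.99  # "prior of near 1" that I am personally conscious
p_materialism = 0.95     # qualia arise from the interactions of neurons
p_brains_alike = 0.99    # other humans' brains relevantly resemble mine

# If materialism holds and their brains are relevantly similar to mine,
# the same logic that makes me conscious should apply to them as well.
p_others_conscious = p_self_conscious * p_materialism * p_brains_alike

print(f"P(most other humans are conscious) ~ {p_others_conscious:.2f}")
```

Treating the three claims as independent for simplicity, their conjunction implies the conclusion, so the product serves as a rough lower bound, and it still lands close to 1.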
Obviously I make no claims that I have an empirical method of finding qualia itself, only reasons to strongly suspect it exists; but unlike those who believe that it is beyond the magisterium of empirical investigation, I think that sufficiently advanced science can answer the question.
I could see it being beyond the abilities of baseline humans to answer, because Allah knows we've been trying for millennia, but I don't see it being inscrutable to a superhuman AGI, and it might turn out that the answer is so distressingly simple that we all collectively face-palm when we hear it.
are simply not incoherent in the way that
is incoherent.
You provided one argument for thinking so, and I explained why it was unsound. So I’m not sure why you’re still repeating that claim.