This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

This clears up my confusion. I agree with you that the current evidence is that generic LLMs lack "sophisticated" models of chess, for some reasonable definition of "sophisticated." Now, whether that means they're missing an important ingredient of human-level intelligence, or can't be conscious, I don't know, and I'm not sure how anyone can know. What seems very likely to me is that, lacking a "sophisticated" model of chess (or of the world, social life, physics, etc.), an LLM is lacking an important ingredient of human-emulating or human-like intelligence; but that doesn't imply it lacks human-level intelligence. As for consciousness, I think the Hard Problem remains Hard.
Perhaps most people would agree with that; I might, depending on what you mean by "internal representation." Certainly I doubt that the computer has a model that could trivially produce an accurate representation of each of the 64 squares, where each of the 32 pieces sits, and whose turn it is. But I'd say that doesn't mean the computer isn't modeling the game, or that it doesn't have some sort of internal model of the chessboard. It's just a wrong model, one far more wrong than any typical human's, and one that is wrong due to bizarre mistakes that even a stupid human wouldn't commit. To make that concrete, here's a rough sketch of how one might probe for such a model: play out a known sequence of moves, ask the LLM to reconstruct the position, and compare against ground truth. This is only an illustration of the idea, assuming the openai Python client and the python-chess library; the model name and prompt wording are placeholders of mine.
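```python
# Sketch: compare an LLM's claimed board state against ground truth.
# Assumes the openai Python client and python-chess are installed;
# the model name and prompt are illustrative placeholders.
import chess
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Ground truth: play a known opening with python-chess.
moves = ["e4", "e5", "Nf3", "Nc6", "Bb5"]
board = chess.Board()
for san in moves:
    board.push_san(san)

# Ask the LLM to reconstruct the same position from the move list.
prompt = (
    "After the chess moves 1. e4 e5 2. Nf3 Nc6 3. Bb5, list every piece "
    "on the board with its square, one per line, e.g. 'e4: white pawn'."
)
reply = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print("LLM's claimed position:\n" + reply.choices[0].message.content)
print("\nGround truth:\n" + str(board))
```

The interesting part is where the reconstruction goes wrong: the bizarre, un-humanlike mistakes are exactly the evidence that whatever internal model exists, it isn't the one a typical human would keep in their head.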
Can you explain to me the difference between human-emulating and human-level?
Human-emulating would mean reaching conclusions and decisions through a process that resembles, in some way, how humans reach theirs. The most obvious version would be following some sequence of "thoughts" that a typical human could look at and honestly think, "That's similar to how I might think through this." Another would be literally emulating our entire brain, possibly down to the sub-atomic particles.
Human-level would simply mean being able to pass intelligence tests (any you could come up with, including IQ, but also tests involving social awareness or physical performance) at a level similar to humans. How it accomplishes this wouldn't matter. Perhaps tomorrow we discover that God is real and can be communicated with via a new antenna we've developed; we put that antenna on a computer, tell it to ask God what to do in order to behave as intelligently as a human, and God, in His great benevolence, decides to answer accurately. That computer would have human-level intelligence, but certainly not human-emulating intelligence.
It seems to me that there's not much difference. Part of an intelligence test given to a (putative) human-level intelligence would be to ask it to perform human emulation.
In any event, the definition I chose (for the sake of argument) for the consciousness question follows that of William Poundstone, which pretty much requires sophisticated models for an entity to be conscious.
Passing that test wouldn't indicate human-emulating intelligence, though, and human-level intelligences exist that would fail it, because emulating humans isn't something humans actually do via their human-level intelligence; they simply behave as humans, which looks like emulating humans but isn't. Furthermore, even if some alien intelligence were able to emulate humans, that wouldn't give us any insight into how it uses its alien intelligence to produce behavior (or even thought) that emulates humans. We see a small version of this right now with LLMs in "chain of thought" (CoT) mode, where we instruct the LLM to work out its thoughts in a logical sequence, similarly to how a human might think them out. There's no way of knowing whether the conclusion the LLM reached was actually due to that chain of thought, or whether some separate process produced both the chain of thought and the conclusion.
E.g., if you present a question like, "In this universe, all dogs are blue. Jim is a dog. What color is Jim?", a human might think, "Logically, because all dogs are blue and Jim is a dog, Jim must have all the characteristics that a dog must have, including being blue. Therefore Jim is blue!" An LLM with CoT might produce that exact same text and conclude "Jim is blue," but we have no insight into the actual "thinking" the LLM followed to get there. A human would model this universe as one in which all dogs are blue, and Jim as an individual dog, which must therefore be blue. We have no way of knowing what model the LLM has of this universe, beyond the fact that it produces the text "Jim is blue" as an answer to the question. Here's a minimal sketch of what I mean (again assuming the openai client; model name and prompt wording are placeholders): the same toy question, asked with and without a CoT instruction.
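```python
# Minimal sketch: the same toy question with and without a
# chain-of-thought instruction. In both cases the only observable
# is the text the model emits; the reasoning it prints is not
# necessarily the process that produced the answer.
# Assumes the openai Python client; model name is a placeholder.
from openai import OpenAI

client = OpenAI()

question = "In this universe, all dogs are blue. Jim is a dog. What color is Jim?"

def ask(prompt: str) -> str:
    reply = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content

# Direct answer: no visible "reasoning" at all.
print(ask(question + " Answer with one word."))

# CoT style: we get a human-looking derivation, but it's still just
# output text, with no guarantee it reflects the actual computation.
print(ask(question + " Think step by step, then state your conclusion."))
```

The two calls differ only in the prompt; nothing about the second one gives us any more access to the process behind the words than the first.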
An intelligence being able to emulate a human in no way indicates that the intelligence is formed in a human-like way, though it certainly proves that the intelligence is at least human-level (since it can always just emulate humans to reach human-level performance).
I think I misunderstood your point. It seems you are saying that if an entity successfully emulates human intelligence -- in terms of output -- it's not necessarily human-emulating, in the sense that it may have used processes and procedures entirely unlike those of human beings to reach that output.
Do I understand you correctly?
I believe you understand me correctly. It's like how a PS3 can emulate an NES: the underlying circuitry of the PS3 is very different from that of an NES. And if, in the future, scaling up LLMs and making them faster created something truly indistinguishable from a human in its chain of thought, its speech, and perhaps even the actions of an attached android (none of which is guaranteed to happen, of course), that wouldn't indicate that the LLM was a human-emulating intelligence. Rather, it would be an intelligence emulating how a human thinks and behaves, while the underlying intelligence that allows it to do so would still be that of an LLM, which, as far as we can tell right now, doesn't work by emulating humans.
OK, and it seems your position is that -- possibly -- LLMs (or computers in general) could achieve human-level intelligence without the use of sophisticated models, in which case they would have human-level intelligence without human-emulating intelligence. Right?
Yes, but it depends on what you mean by "sophisticated" models. Certainly the models would have to be complex, detailed, accurate, and precise, at a level similar or equivalent to the models we humans use in our heads. But the models would likely be utterly incomprehensible to us humans.