This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

We've only known that tigers are made out of atoms for a few hundred years. That is a fact of interest to biologists, I'm sure, but everyone else was and is well-served by a higher-level description such as "angry yellow ball of fur that would love to eat me if it could." The point is that the more reductionist framework doesn't obviate higher-level models. They are complementary. Both models are useful, and differentially useful in practice.
(The tiger could be made out of 11-dimensional strings, or it, like us, could be instantiated in some kind of supercomputer simulating our universe, rather than atoms. This makes very little difference when the question is how to run a zoo or how to behave when you see one lurking in your driveway.)
You think that LLMs being a "bunch of weights" makes ascribing a personality to them somehow incorrect. I don't see how that's the case, any more than humans (or tigers) being made of atoms precludes us from being conscious, being minds, or having personalities, even if we don't know how those properties arise from atoms.
He's not Donald Goose, is he? Jokes aside, I'm not sure what the issue is here. Donald Duck on your TV is a collection of pixels, but his behavior is better described as "short-tempered anthropomorphic duck with an exhibitionist streak" (he doesn't wear pants, which is probably OK in principle).
If you sit down to play chess against Stockfish, you can say "this is just a matrix of evaluation functions and search trees." You would be correct. But if you actually want to win, you have to model it as a Grandmaster-level opponent. You have to ascribe it "intent" (it wants to capture my queen) and "foresight" (it is setting a trap), or you will lose.
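To make "evaluation functions and search trees" concrete, here is a minimal, purely illustrative sketch (nothing like Stockfish's real code; the piece values and depth are arbitrary): a depth-limited negamax over a python-chess board with a crude material evaluation. The "intent" and "foresight" you ascribe to the engine are, mechanically, just this loop assuming the other side will also pick whatever is best for it.

```python
# Illustrative only: a toy "evaluation function + search tree", not Stockfish.
# Assumes the python-chess package (`pip install chess`).
import chess

# Crude material values; a real engine's evaluation is vastly richer.
PIECE_VALUES = {
    chess.PAWN: 100, chess.KNIGHT: 320, chess.BISHOP: 330,
    chess.ROOK: 500, chess.QUEEN: 900, chess.KING: 0,
}

def evaluate(board: chess.Board) -> int:
    """Static evaluation from the side-to-move's point of view."""
    score = 0
    for piece_type, value in PIECE_VALUES.items():
        score += value * len(board.pieces(piece_type, board.turn))
        score -= value * len(board.pieces(piece_type, not board.turn))
    return score

def negamax(board: chess.Board, depth: int) -> int:
    """Search the tree, assuming the opponent also plays what's best for them.
    Mate/stalemate scoring is omitted for brevity."""
    if depth == 0 or board.is_game_over():
        return evaluate(board)
    best = -10**9
    for move in board.legal_moves:
        board.push(move)
        best = max(best, -negamax(board, depth - 1))
        board.pop()
    return best

def best_move(board: chess.Board, depth: int = 3) -> chess.Move:
    """Pick the move with the best negamax score; the 'trap-setting' lives here."""
    best_score, choice = -10**9, None
    for move in board.legal_moves:
        board.push(move)
        score = -negamax(board, depth - 1)
        board.pop()
        if score > best_score:
            best_score, choice = score, move
    return choice

print(best_move(chess.Board()))  # prints some legal opening move in UCI notation
```

That loop is all there is, and yet treating its output as "it wants my queen" is still the only practical way to play against it.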
My point is basically an endorsement of Daniel Dennett's "intentional stance." Quoting the relevant Wikipedia article:
As I've taken pains to explain, conceptualizing LLMs as a bunch of weights is correct, but not helpful in many contexts. Calling them "minds" or ascribing personalities to them is simply another model of them, one that's definitely more tractable for the end-user and also useful to actual AI researchers and engineers, even if they're using both models.
Note that this conversation started off with a discussion about using LLMs for coding purposes. That is the level of abstraction relevant to the debate, and at that level, noting the macroscopic properties I'm describing is more useful, or at least adds useful cognitive compression and produces better models than "a collection of weights" does.
I will even grant the main failure mode: anthropomorphism becomes actively harmful when it causes you to infer hidden integrity, stable goals, or moral patienthood, and then you stop doing the boring engineering checks. But that is an argument for using the heuristic carefully, not an argument that the heuristic is incoherent. As far as I'm aware, I don't make that kind of mistake.
No, I don't. I can just think about the best move to play given the conditions on the board and my own knowledge of chess. In fact, I believe that's what most chess players do. If you get into the mindset of "Okay, I have to model Magnus' mental model of the chessboard so that I can preemptively counter him," you're playing against an incomplete set of data built on a lot of assumptions. It's classic autist overthinking when the real data is the board in front of you.
Miss me with that new atheist bullshit. This is a guy who would trust The Science (TM) because of its rationality and empiricism. You know, two philosophical stances that have no holes in them whatsoever.
From your quote of him:
Lol, what. Why do you think there's a bias towards open source, or at least towards reviewing source, especially in security communities? You want to know the structure and design of software to ensure it's performing as expected and safely. The various "neuro" fields (neuropsych, neurobiology, neurochemistry) are all about doing the best we can to understand the incredibly complex structure of the brain and, from it, how "mind" might emerge. Dennett comes along and hand-waves it all away: "not necessary!"
It's not a conceptualization, it's a definition. That's what they are. This is like saying "you can conceptualize a pair of dice as plastic cubes, but, really, they're living, breathing probability gremlins."
Neither here nor there, but I vividly remember one of the 90s TV cartoons having a whole episode B-plot about Donald having a dark family secret he was trying to keep buried, and it turned out to be that he's actually a goose. Not DuckTales; Donald was hardly in that. Maybe Quack Pack? House of Mouse? One of those.
No. When top GMs talk about how they play against computers, they clearly treat them in a significantly different way than they treat humans. They know what kind of things are included in the evaluation function, like the 'contempt' factor, that can cause it to sometimes behave in non-human ways. They know that it is a perfect calculator (or at least as perfect as it's set to be, so often they're trying to probe how it's set), and that colors the way they think about positions and how they choose to spend their own time calculating.
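(The "probe how it's set" part can be taken literally, by the way. Here's a hedged sketch with python-chess, assuming a UCI engine binary at a made-up path; older Stockfish builds expose a Contempt option, newer NNUE builds dropped it, so the sketch checks before touching it.)

```python
# Sketch only: list a UCI engine's exposed settings and adjust one if present.
# Assumes python-chess is installed and an engine binary exists at ENGINE_PATH.
import chess
import chess.engine

ENGINE_PATH = "/usr/local/bin/stockfish"  # hypothetical install location

engine = chess.engine.SimpleEngine.popen_uci(ENGINE_PATH)
print(sorted(engine.options))             # every knob this engine actually exposes
if "Contempt" in engine.options:          # present on older builds, absent on newer ones
    engine.configure({"Contempt": 50})    # bias toward playing on over drawish lines
info = engine.analyse(chess.Board(), chess.engine.Limit(depth=12))
print(info["score"])                      # the evaluation function's verdict
engine.quit()
```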
One might occasionally anthropomorphize in terms of "it wants to capture my queen", just because that's easy to do, since one is so used to talking about human opponents in that way. But this is done even when one is not playing against any entity, human or silicon.

Take, for example, the process of solving a puzzle. This is just purely a practice exercise. There is no human, no evaluation function or search tree, no model weights (many modern engines also use NNs) actually sitting on the other side of the board making actual moves against you. Sometimes, those puzzles are from actual games, so you can at least see what one other human thought. Sometimes, they have annotations for other lines, so you can see additional thoughts from other humans. Sometimes, they're computer checked (or you check it yourself), so you can see what compy "thinks" (computes).

But fundamentally, you're just thinking game-theoretically, which requires you to think about two different (opposed) value functions. Some 'puzzles' aren't even puzzles; they're just evaluation exercises. "Here's a position, what do you think about it?" There's no actual entity on either side. But imprecisely thinking, "What does black 'want' here," "What does white 'want' there," is almost universally helpful, if not mandatory, just to keep in our mind the tension between differing payoff functions and how they interact.
I've done a fair amount of game theory, and it's natural to anthropomorphize purely abstract payoff functions, no model weights or neurons or anything required. When I'm working with new students, it takes work to get them to be able to reason about them, so it's an extremely helpful crutch to regularly poke them with, "...and suppose that player did what you're proposing; now, imagine you're on the other side; how would you respond?" And so, you just sort of get used to imagining a human-like (or for many of my purposes, a human augmented with computational resources) entity on each side, actually thinking in a self-interested way.
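If it helps, the "now imagine you're on the other side" prompt is mechanical enough to write down. A toy sketch, with a made-up zero-sum payoff matrix (not from any real game): propose a row action, compute the column player's best reply, then switch back and see whether the proposal survives.

```python
# Hypothetical payoffs for the row player; the column player gets the negation.
PAYOFFS = [
    [3, -1],
    [0,  2],
]

def column_best_response(row_action: int) -> int:
    """The column player minimizes the row player's payoff (zero-sum)."""
    return min(range(len(PAYOFFS[0])), key=lambda c: PAYOFFS[row_action][c])

def row_best_response(col_action: int) -> int:
    """The row player maximizes their own payoff."""
    return max(range(len(PAYOFFS)), key=lambda r: PAYOFFS[r][col_action])

proposal = 0                             # a student proposes row action 0
reply = column_best_response(proposal)   # "imagine you're on the other side"
counter = row_best_response(reply)       # ...and now switch back
print(proposal, reply, counter)          # 0 1 1: the proposal doesn't survive
```

No neurons, no weights: just two payoff functions in tension, and "what does the column player want here?" is still the natural way to talk about it.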
But back to GMs playing computers. They've been thinking this way for decades. Sometimes with actual humans on the other side, sometimes just a puzzle, whatever. They've honed the skill of rapidly thinking right past the step of, "What would I do if I were on the other side at that particular moment?" And these days, top GMs are pretty comfortable distinguishing between the different ways that engines "think" about positions. Watch a few of Hikaru's many, many videos where he plays against a bunch of different bots. He very clearly understands that they're evaluation functions and search trees, and that different combinations of evaluation functions and search trees of varying depths have different strengths and weaknesses. He still regularly plays variations of 'anti-computer chess' where he's 100% banking on there being a significant difference between modeling it like a particular evaluation function with a particular set of search tree parameters (potentially also with a particular opening book/endgame tablebase) and modeling it like a GM-level human opponent.