This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
You don't really have read/write access to your hard drive either, unless you open it up and look with a microscope. The "direct" access you get as a normal user is just a very reliable introspective report.
That's because the computer is designed to be understandable and manipulable. It's not the least bit difficult to write a program or OS that doesn't have meaningful interactable gears for you, and transistor-level analysis is not the best, most efficient way to interact with computers. I mean, we talk a lot about LLMs here, and I don't think they are the same thing as humans, but it seems like they pass as non-mechanical by your criteria.
But you can in fact open it up and look at it with a microscope. Moreover, you can make a new one from scratch with tools, and make it to your exact specifications. You cannot open the mind and look at it with a microscope, and you cannot make a new mind to-spec with tools.
And this is distinct from the access you have working in the hard drive factory. But there is no hard drive factory for minds; the normal user access is all the access any of us have ever observed or confirmed empirically.
The computer is matter. Matter was not "designed" to be understandable and manipulable. It is understandable and manipulable, and so complex arrangements of matter that we intentionally construct with tools generally retain this property. To the extent they lose this property, it is generally because multiplicative complexity accelerates their mechanics from within our grasp to outside it, and we can generally simplify that complexity to make them graspable again. In the same way, we construct LLMs from mechanical components, and to the extent that they lose the predictable and controllable mechanistic nature, it is through the multiplication of complexity to an intractable degree.
We do not construct human minds from mechanical components, and we cannot identify mechanical components within them; we can neither point to nor manipulate the gears themselves. Minds may well be both mechanical and intractably complex, but the intractable complexity prevents the mechanical nature from being demonstrated or interacted with empirically. Hard Determinism is a viable axiom, but not an empirical fact. The problem is that people do not appear to understand the difference.
We can identify neurons, which are not quite as predictable as transistors but pretty good. I think we can also grow and arrange them in a controlled way to some extent, though not at the scale of a human brain. We can in fact gears-model simple organisms on an individual-neuron basis. So it seems to me that if we are uncertain whether brains are "mechanical but intractably complex", we should be similarly uncertain about LLMs.
I indeed don't understand the distinction you draw between axioms and inference. Even if we could build brains, couldn't you equally claim that it's "axiomatic" whether the non-manufactured ones are also mechanical? If I could predictably control people in a gears-model way, are they still mechanical while I'm not looking? Is it actually an illusion, so that I can only "control" them into doing things they would do anyway, even though I feel like I could have chosen anything? Who's to say that I have a 1/6 chance of dying when I spin the revolver and put it to my head, just because everyone else does?
I do not believe that is the problem here. The problem is that your explanations for the current gap in Hard Determinism, namely the lack of a user-friendly brain interface, are, in their structure, no different from explanations that were at various points raised against other gaps that have since been resolved.
Resolved by you yourself, in the case of comparing LLMs to human brains! We know the building blocks of LLMs, and have the control capacity to inspect and manipulate their state in less complex iterations, but not in more complex ones. We know the building blocks of organic chemicals, which resolve to DNA, which resolve to live cells, some of which are neurons, and the earlier less complex iterations of those structures we can not only predict but manipulate and recreate. Nondeterminism simply does not make a convincing enough case that the latest iteration, the live human brain, is somehow so qualitatively different from a silicon-based neural network that hoping to grasp it with determinism is hopeless hubris.