
Culture War Roundup for the week of December 26, 2022

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I think supra-individual mental structures are only deserving of power inasmuch as they increase human freedom, with freedom imprecisely defined as the capacity to make diverse and spontaneous choices.

What makes a choice "diverse" and "spontaneous"?

Well, here's why it's an imprecise definition.

The point isn't really spontaneity; spontaneity is just one proxy metric of unpredictability. The real point is non-reduction of those properties: not having a person run into cul-de-sacs where he dies or is reduced to some short, gimped algorithm, and not having sections of the world irreversibly closed off to him.

My idea of life and freedom, were I to succeed in rigorously defining it, would probably be similar to «empowerment gain» in this theoretical ML paper (did I mention already that software engineering is applied philosophy and computer science is just philosophy?):

We introduce a physiological model-based agent as proof-of-principle that it is possible to define a flexible self-preserving system that does not use a reward signal or reward-maximization as an objective. We achieve this by introducing the Self-Preserving Agent (SPA) with a physiological structure where the system can get trapped in an absorbing state if the agent does not solve and execute goal-directed policies. [...] The valence function can then be used for goal selection, wherein the agent chooses a policy sequence that realizes goal states which produce maximum empowerment gain. In doing so, the agent will seek freedom and avoid internal death-states that undermine its ability to control both external and internal states in the future, thereby exhibiting the capacity of predictive and anticipatory self-preservation.

Stoffel the Honey Badger was the star of a 2014 PBS documentary called Honey Badgers: Masters of Mayhem, in which he is shown performing impressive escape routines from his pen, Badger Alcatraz [1]. This was all to the astonishment and annoyance of the caretaker Brian, who constantly had to remove the items and resources that Stoffel used to open gates and jump over walls. If there was a tree in the middle of the pen, Stoffel would climb up it and sway it in the direction of the wall for a timed leap. Remove the tree and Stoffel would find novel objects like a branch or a rake, or he would unearth stones to position next to the wall to climb up. And if those were taken away, Stoffel would pack mud into balls and stack them into a climbable pyramid. What else can honey badgers do? If there is food in a box, they can move objects under it to climb up close enough to reach it [2]; and, if there is a gate with a latch, Stoffel and his girlfriend Hammie can coordinate to undo the latch mechanism and open the door. Not only do honey badgers complete these tasks with clever reasoning, they do so potentially with a variety of possible motivations: for satisfying hunger, or for expanding the capacity to move into new external territory, or perhaps, much more speculatively, for the pleasure of trolling Brian by acting in a way that defies his preferences.

... We argue that the problem of machine wanting, and the process of state-justification, can be addressed by empowerment gain maximization in the Cartesian product space of SPA’s coupled internal and external transition operators (which we call product-space valence), where the controllability of the product space must be maintained or expanded to resist collapse. Formally, empowerment is the n-step channel capacity of a transition operator, and quantifies the maximum information an agent can transmit from its actuators to resulting states—a kind of controllable optionality [32]. In this paper, the difference in empowerment over a long course of action will be quantified by a valence function V, which is a function of the agent’s state and internal organization in the form of a hierarchical transition operator. A potential criticism of empowerment is that it only results in increased optionality, but it does not dictate what goals to work on. This might be true if empowerment is computed on a single flat state-space when reward functions are considered to constitute a task. But as we will show, in hierarchical state-spaces, reward-free goal-directed motivation is entailed by empowerment maximization especially when the agent’s empowerment in the hierarchy can collapse over time in the absence of planning. For instance, if an agent gets hungrier over time, eventually there will come a point in which the agent cannot move around to perform tasks. Putting the computational work into achieving goals that transform some other (physiological) state-space then becomes an imperative. Empowerment gain in a hierarchical space can thus be thought of [as] the contraction or expansion of an agent’s capacity to control other internal and external state spaces, much in the same way a lieutenant might sense the internal contraction of his or her capacity to perform learned skills and tasks in the world after a demotion, or how Stoffel might sense the expansion of his capacity to access parts of the world and procure objects, food, and mating opportunities external to Badger Alcatraz if he were to escape—these are computations that propagate information across a hierarchy of state-spaces.

Empowerment is a measure of an agent’s capacity to predictably realize a variety of future state outcomes from a given starting state x_t.
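
For reference, that sentence gestures at the standard definition of empowerment (due to Klyubin, Polani and Nehaniv, which the paper builds on; the formula here is my gloss, not quoted from the paper): n-step empowerment is the channel capacity from an agent's action sequences to the resulting state,

$$\mathfrak{E}^{(n)}(x_t) \;=\; \max_{p(a^n)} I\!\left(A^n;\, X_{t+n} \mid x_t\right)$$

where the maximum is over distributions $p(a^n)$ on length-$n$ action sequences and $I$ is mutual information; more bits of capacity means more controllable optionality from $x_t$.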

You get the idea.
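
For concreteness, here is a minimal sketch (my own toy construction, not code from the paper) of n-step empowerment in a deterministic gridworld, plus a toy version of the paper's empowerment-gain goal selection. With deterministic dynamics the channel capacity above reduces to the log of the number of distinct states reachable in n steps, which makes the "controllable optionality" reading literal:

```python
import itertools
import math

# Toy deterministic gridworld: states are (row, col) on a 4x4 grid.
# Moves that would leave the grid are no-ops (the agent stays put).
SIZE = 4
ACTIONS = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def step(state, action):
    """Deterministic transition operator."""
    r, c = state[0] + ACTIONS[action][0], state[1] + ACTIONS[action][1]
    return (r, c) if 0 <= r < SIZE and 0 <= c < SIZE else state

def empowerment(state, n):
    """n-step empowerment in bits. For a deterministic transition
    operator, max_p I(A^n; X_{t+n} | x_t) is achieved by spreading
    probability over action sequences with distinct outcomes, so it
    reduces to log2(#states reachable in exactly n steps)."""
    reachable = set()
    for seq in itertools.product(ACTIONS, repeat=n):
        s = state
        for a in seq:
            s = step(s, a)
        reachable.add(s)
    return math.log2(len(reachable))

def greedy_empowerment_step(state, n=2):
    """Toy 'empowerment gain' goal selection: pick the action whose
    successor state has the highest n-step empowerment."""
    return max(ACTIONS, key=lambda a: empowerment(step(state, a), n))

# A cornered agent has fewer controllable futures than a central one:
print(empowerment((0, 0), 2))       # corner:  log2(6) ~= 2.58 bits
print(empowerment((1, 1), 2))       # central: log2(9) ~= 3.17 bits
print(greedy_empowerment_step((0, 0)))  # 'down': moves out of the corner
```

A planner maximizing empowerment gain in this toy world drifts away from corners and cages without any reward signal, which is the reward-free, goal-directed motivation the quoted passages are pointing at.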

Doesn't determinism, or the wave function being linear, or something like that, mean this metric doesn't physically work? I.e., the actual states, or microstates, or continuous state-space, or whatever, don't distinguish between "you are in a cage and wiggle a bit and that disturbs the air atoms" and "your army crushes the other army". I.e., in order to say which macrostates are more interesting than other macrostates, you just ... restate the original question.

I agree with 'good ~ power, capability, complexity, accomplishment in the far future', although without the 'self' or 'agent' sense. But a physical measurement of that, in a 'here is the goodness number' sense, doesn't work.

My idea of life and freedom, were I to succeed in rigorously defining it, would probably be similar to «empowerment gain» in this theoretical ML paper

This doesn't do what you want it to do.

First, it's defined in terms of what the agent could do in the future, not what it will actually do. So if the pothead could do something productive but had his values shifted to where he doesn't want to any more, that wouldn't limit his empowerment in the sense defined there.

Also, it's defined for discrete finite outcome states, and adapting it to the continuous case requires an additional parameter: most simply, a measure on the outcome space which tells you how valuable granularity of control is in different parts of that space relative to each other.
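
To make the granularity point concrete (my notation, not the commenter's or the paper's): in the deterministic discrete case the quantity is $\log_2 |R_n(x_t)|$, the log-count of the set $R_n(x_t)$ of states reachable in $n$ steps. A count has no unique continuous analogue, so one has to pick something like

$$\mathfrak{E}_{\mu}^{(n)}(x_t) \;=\; \log_2 \mu\big(R_n(x_t)\big)$$

for a reference measure $\mu$ on the outcome space, and different choices of $\mu$ value fine-grained control in different regions differently. That measure is exactly the extra parameter being described here.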

It does not address the value drift, yes. But on the other hand, I do not think humans can keep a capacity intact despite completely giving up practice, so a close variant of this approach that accounts for a humanlike decaying architecture would compel the agent to sometimes check whether the capacity is still available, no?

It depends. Presumably you can also regain the capacity by practicing it again, for example, and in that case the longer time-horizons wouldn't care that it went away. And if you set it up in a way where it did matter, then probably your capacity to slavishly obey someone would matter in a similar way. The formalism you've found just isn't particularly related to your problem, and if you find a way to make it do what you want, it will be mostly your additions that are doing that work.