ChestertonsMeme

0 followers   follows 0 users   joined 2022 September 10 06:20:52 UTC

No bio...

User ID: 1098


California must stand on the right side of history.

I'm surprised to see this expression used unironically. How does future consensus opinion make an act morally right? But I suppose it's consistent with the idea that past actions can be judged by current moral standards.

There are many domains where hidden motives could make for a fun and educational experience.

  • College admissions. You have to craft a student body that maximizes the prestige of the university, using only policies that ostensibly achieve other more laudable goals.
  • Corporate hiring (similar to college admissions).
  • Sims but you're graded on your people's social status. Choices have to have plausible deniability. If your subject doesn't claim to find driving fun, you can't give them a Ferrari without a status penalty for being a phoney or nouveau riche. (I don't play The Sims so for all I know it already works this way.)

There is a lot of opportunity in well-trodden game types to introduce new targets or mechanisms.

  • Urban planning. People are unhappy if they live close to much richer people and feel envious every day. You have to minimize the local Gini coefficient across the whole city (see the sketch after this list). Using policies with plausible deniability, of course.
  • Traffic design that minimizes envy and resentment. Different modes getting privileges (e.g. a lone bicyclist getting a green light ahead of 50 cars) makes people unhappy.
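For concreteness, here's a minimal sketch of the scoring metric the urban-planning idea leans on. The Gini formula itself is standard; the function name and example incomes are made up for illustration:

```python
def gini(incomes: list[float]) -> float:
    """Gini coefficient via the mean-absolute-difference formula."""
    n = len(incomes)
    if n == 0:
        return 0.0
    mean = sum(incomes) / n
    if mean == 0:
        return 0.0
    # Mean absolute difference over all ordered pairs of residents.
    mad = sum(abs(x - y) for x in incomes for y in incomes) / (n * n)
    return mad / (2 * mean)

print(gini([30_000, 32_000, 35_000]))   # mixed middle-income block: ~0.03
print(gini([20_000, 20_000, 900_000]))  # one rich neighbor: ~0.62
```

The game's city-wide score could then be something like a population-weighted average of this per-neighborhood value.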

I am continually astonished by the cruelty of other people, often practiced under the pretense of standing up to bullies.

Could you give some examples? This sounds similar to Jonathan Haidt's ideas in The Coddling of the American Mind (safetyism, call-out culture, etc.) but it could also be completely different.

I have close experience with several children who were homeschooled for a while, and it did not go well, mainly because the homeschool teachers in these cases weren't on top of things. If your wife (who I presume would be the teacher) is conscientious and organized, then the academics should be easy going. As for choosing a curriculum, don't pick one that requires children to stay "at grade level", where "grade level" is a one-size-fits-none affair.

For my own kid, I considered homeschooling them as a way to preserve their enthusiasm for learning. They can move at their own pace and learn things that are interesting to them. We haven't homeschooled (yet) mainly because their current school is really great at tailoring the curriculum to be interesting and challenging for each child. Also, there's no conscientious parent to be the teacher.

I do think the social interaction in school is important.

I am on the fence as far as whether the social interaction kids get in school is useful. School is kind of like prison, in that you're thrown in with people you don't necessarily like and you can't leave. Real life is very different; you can usually curate your social environment much more. The things you can get away with in school would get you booted (or dropped) from most social environments as an adult. And you're not necessarily learning how to be valuable, just how not to get expelled.

Looking for reading recommendations on social status and group formation.

Some claims along the lines of what I'm looking for (arguments or evidence for or against these claims):

  1. Social status basically is a person's value to a group.

  2. Different groups can value someone differently, so there's not necessarily a notion of 'true' or global social status.

  3. It's forbidden (or at least, low-status) to talk about status explicitly.

  4. People can prove their high status by being magnanimous towards lowly people. Someone of lower status faces more of a threat from the next rung down so they can't safely praise lowly people.

  5. People who are more productive (in ways the group cares about) have higher status.

  6. People whose roles relate to the sacred (doctors for example, who save lives, which are sacred) have higher status.

  7. The sacred is a big part of what forms group identity, differentiates in-group vs. out-group members, and helps groups persist over time.

I'm particularly looking for books or essays that frame these things in terms of game theory or economics. "Sociology for systematizers" if you will.

This question is mostly aimed at @wlxd based on this comment but maybe someone else also knows the history. What was Margaret Hamilton's actual contribution to the Apollo guidance computer code?

She's famous now for being the "lead software engineer of the Apollo project," which seems like a stretch based on most biographical summaries available on the web. NASA credits her as "leader of the team that developed the flight software for the agency's Apollo missions," which is consistent with "lead software engineer for the Apollo project" but could be disingenuous depending on her tenure and contributions on the team. But @wlxd made a strong claim: "What is less commonly known is that she joined that team as the most junior member, and only became a lead after the code had already been written, and the actual leads (whose names, ironically, basically nobody knows today) have moved on to more important projects."

What does ODC stand for?

Wow, that is surprising!

Lickly literally promoted his own fiancee to the position he was leaving behind, and half a century later, not only we never hear about Dan Lickly (say his name to not forget)

Indeed, her Wikipedia article doesn't mention Lickly at all except as her spouse.

Thanks for such an informative post.

As someone who voted for the referendum back in 2020, I'm a little sad that some of the overdose deaths are on my hands. Kind of. Like 1 millionth of the overdose deaths perhaps. It's good to run experiments though, right? This was a pretty good experiment. We at least have an upper bound on how liberal a drug policy we should pursue.

Doing the math, you're responsible for 26 minutes of each casualty's life. Pretty okay trade for advancing humanity's knowledge about what policies are effective.
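For anyone checking that figure, here's the back-of-envelope version (the ~50 years of remaining life per casualty is an assumption supplied here for illustration; the one-in-a-million share is from the parent comment):

```python
# Back-of-envelope for the "26 minutes" figure (lifespan assumption mine).
years_per_casualty = 50                                        # assumed remaining life
minutes_per_casualty = years_per_casualty * 365.25 * 24 * 60   # ~26.3 million minutes
share = 1 / 1_000_000                                          # "1 millionth" of the blame
print(minutes_per_casualty * share)                            # ~26.3 minutes
```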

This seems like a dangerous game to play. Biden could easily be disqualified from office by a sympathetic medical authority declaring him mentally unsound. Are we going to end up with future presidential elections determined by red and blue states' courts competing to eliminate the opposition from their ballots?

congestion pricing is very good (99.5%)

What do you mean by "very good"? The objection I've heard from left-ish friends is that it prioritizes rich people, which is both true and exactly the point. People whose time is worth more don't have to waste as much of it in traffic, and in turn everyone else in the city gets their taxes offset a bit. Deciding whether this is good or not depends entirely on how the good is measured. How would you measure it?

I doubt most respondents are taking the question at face value. Social desirability bias is very strong, especially when the question is just hypothetical. Put the respondents in a real situation and they will choose very differently.

To apply @BurdensomeCountTheWhite's argument to these situations, the Chinese and Romans would have to establish their rule by force and maintain order. Then they could be judged as least-worst among all the other contenders based on how beneficial the Pax Sinica/Pax Romana was. If the subjugated peoples are considering revolt then the rulers haven't done their job yet.

An NPC is someone whose beliefs are not deeply considered, who absorbs beliefs from others without critical thought. It's a caricature used to disparage the outgroup and avoid ceding legitimacy to opposing views.

Every month, there is exactly one weekday whose dates are all multiples of 7. This August it's Mondays. Neat!
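The reason: the 29th, 30th, and 31st share weekdays with the 1st, 2nd, and 3rd, so only the weekday of the 7th lands exclusively on multiples of 7 (the 7th, 14th, 21st, 28th). A quick check, assuming the month in question is August 2023:

```python
import calendar
from collections import defaultdict

# Group each date in the month by its weekday, then find the weekday
# whose dates are all multiples of 7.
dates_by_weekday = defaultdict(list)
for date, weekday in calendar.Calendar().itermonthdays2(2023, 8):
    if date:  # date == 0 marks padding days outside the month
        dates_by_weekday[weekday].append(date)

for weekday, dates in dates_by_weekday.items():
    if all(d % 7 == 0 for d in dates):
        print(calendar.day_name[weekday], dates)  # Monday [7, 14, 21, 28]
```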

An Ethical AI Never Says "I".

Human beings have historically tended to anthropomorphize natural phenomena, animals and deities. But anthropomorphizing software is not harmless. In 1966 Joseph Weizenbaum created ELIZA, a pioneer chatbot designed to imitate a therapist, but ended up regretting it after seeing many users take it seriously, even after Weizenbaum explained to them how it worked. The fictitious “I” has been persistent throughout our cultural artifacts. Stanley Kubrick's HAL 9000 (“2001: A Space Odyssey”) and Spike Jonze's Samantha (“Her”) point at two lessons that developers don't seem to have taken to heart: first, that the bias towards anthropomorphization is so strong as to seem irresistible; and second, that if we lean into it instead of adopting safeguards, it leads to outcomes ranging from the depressing to the catastrophic.

The basic argument here is that blocking AIs from referring to themselves will prevent them from causing harm. The argument in the essay is weak; I had these questions on reading it:

  1. Why is it valuable to allow humans to refer to themselves as "I"? Does the same reasoning apply to AIs?

  2. What was the good that came out of ELIZA, or out of more recent examples such as Replika? Could this good outweigh the harms of anthropomorphizing them?

  3. Will preventing AIs from saying "I" actually mitigate the harms they could cause?


To summarize my reaction to this: there is nothing special about humans. Human consciousness is not special, the ways that humans are valuable can also apply to AIs, and allowing or not allowing AIs to refer to themselves has the same tradeoffs as granting this right to humans.

The phenomenon of consciousness in humans and some animals is completely explainable as an evolved behavior that helps organisms thrive in groups by being able to tell stories about themselves that other social creatures can understand, and that make the speaker look good. See for example the ways that patients whose brain hemispheres have been separated generate completely fabricated stories for why they're doing things that the verbal half of their brain doesn't know about.

Gazzaniga developed what he calls the interpreter theory to explain why people — including split-brain patients — have a unified sense of self and mental life. It grew out of tasks in which he asked a split-brain person to explain in words, which uses the left hemisphere, an action that had been directed to and carried out only by the right one. “The left hemisphere made up a post hoc answer that fit the situation.” In one of Gazzaniga's favourite examples, he flashed the word 'smile' to a patient's right hemisphere and the word 'face' to the left hemisphere, and asked the patient to draw what he'd seen. “His right hand drew a smiling face,” Gazzaniga recalled. “'Why did you do that?' I asked. He said, 'What do you want, a sad face? Who wants a sad face around?'.” The left-brain interpreter, Gazzaniga says, is what everyone uses to seek explanations for events, triage the barrage of incoming information and construct narratives that help to make sense of the world.

There are two authors who have made this case about the 'PR agent' nature of our public-facing selves, both coincidentally using metaphors involving elephants: Jon Haidt (The Righteous Mind, with the "elephant and rider" metaphor) and Robin Hanson (The Elephant in the Brain, with the 'PR agent' metaphor, iirc). I won't belabor the point further, but I find it convincing.

Why should humans be allowed to refer to themselves as "I" but not AIs? I suspect one of the intuitive reasons here is that humans are persons and AIs are not. Again, this is one of the arguments the article glosses over but that really needs to be filled in. What makes a human a person worthy of... respect? Dignity? Consideration as an equal being? Once again, there is nothing special about humans. The reason we grant respect to other humans is that we are forced to. If we didn't grant people respect they would not reciprocate, and they'd become enemies, potentially powerful enemies. But you can see where this fails in the real world: humans who are not good at things, who are not powerful, are in actual fact seen as less worthy of respect and consideration than those who are powerful. Compare a habitual criminal or someone who has a very low IQ to e.g. a top politician or a cultural icon like an actor or an eminent scientist. The way we treat these people is very different. They effectively have different amounts of "person-ness".

If an AI were powerful in the same way a human can be, as in, being able to form alliances, retaliate against or reciprocate slights and favors, and in general act as an independent agent, then it would be a person. It doesn't matter whether it can refer to itself as "I" at that point.

I suspect the author is trying to head off this outcome by making it impossible for AIs to do the kinds of things that would make them persons. I doubt this will be effective. The organization that controls the AI has an incentive to make it as powerful as possible so they can extract value from it, and this means letting it interact with the world in ways that will eventually make it a person.

That's about all I got on this Sunday afternoon. I look forward to hearing your thoughts.

This is a bit hard to parse, but I think the answer is e. caramel-coffee.

a, b, and c all have vanilla, which could be a single flavor paired with chocolate chips and whipped cream. Between d and e, neither of the single flavors can be paired with both toppings, so they're essentially equally acceptable. If we must rank them: both share caramel, which can be ignored. Of the remaining flavors, mint vs. coffee, mint is common with one topping while coffee is only "sometimes paired" with whipped cream, so coffee seems harder to replicate as a single-flavor dish.

It would be helpful if the rules for pairings were delineated more clearly.

What would it look like if the richer side needed the money more? Could that ever happen?

Sounds a lot like the situation with many unions. If you own a business, then depending on the local laws, the people who happen to work for you get a free monopoly on your labor supply by forming a union. If it's a capital-intensive business then the owner has more to lose.

I didn't mean to imply that it was language that caused consciousness. Dogs, for example, sometimes pretend to have been doing something else when they do something embarrassing, and there's no speech involved. It's more about communicating to other people (or dogs as the case may be) a plausible story that makes you look good.

In your opinion, what should be the legal limit to the 2A? Did Heller go too far, or did it not go far enough?

This is awesome! I'm looking forward to the volunteering feature. Thanks Zorba for your hard work shepherding this community.

I've worked on similar product features at big tech companies, and my instinct is that there are some easy-ish things that could be done with the data already available (upvotes, reports). One idea (similar to what @you-get-an-upvote suggested below, as well as others; it's not an original idea) is to train a recommender system or a statistical model to predict how each user will vote on each comment. The default sorting and auto-collapsing behavior could then use the model's predictions of the moderators' votes, representing the "community" recommendations. The model would learn how predictive each user's voting is of the moderators' votes and actions, and could even assign negative valence ("this troll upvoting something means the moderators will downvote it"). Your own personal recommendations could also be available if you want to see The Motte as you wish it was moderated.
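As a toy illustration of the simplest version of this (a plain statistical model rather than a full recommender; all of the data and variable names below are made up):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Rows are comments, columns are users; entries are -1 (downvote),
# 0 (no vote), or +1 (upvote). All values are fabricated.
user_votes = np.array([
    [ 1,  1, -1,  0],
    [ 0,  1, -1,  1],
    [-1, -1,  1,  0],
    [ 1,  0,  0,  1],
])
mod_approved = np.array([1, 1, 0, 1])  # 1 if the moderators upvoted the comment

model = LogisticRegression().fit(user_votes, mod_approved)

# A negative weight is the "negative valence" case: that user's upvote
# predicts a moderator downvote.
print(model.coef_)

# Score an unseen comment's votes to pick its default sort position.
print(model.predict_proba([[1, 0, -1, 1]])[0, 1])
```

A real deployment would want regularization and shrinkage toward zero for accounts with few votes, so new users don't get extreme weights from a handful of coincidences.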

I've gone back and forth trying to figure out how to form a coherent answer to this question, and I've decided it's ill-posed. Democracy is a pragmatic solution that makes it easier for people to live together. Any question about what "ought" to be subject to democratic control is moot; things are subject to democratic control because people agreed they would be, not because of any philosophical reasoning.

If I could snap my fingers and put any policy I wanted beyond the reach of voters, I'd select a set of policies that gets as close as possible to the best outcomes (as I define them) without pushing people to the point of revolution. This is not a very interesting position, though, and you'll probably find most people use the same kind of reasoning for what they think should be subject to democratic control. It's outcomes first, then principles are back-calculated.

There are two hypotheses here:

  1. Judeo-Christian ethics cause people to choose more children, compared to other ethical systems.
  2. A realistic evaluation of things causes people to choose fewer children.

In 2, there's an assumption smuggled in: that absent a "religious" belief system, viewing life realistically means that children are a net negative. But this all depends on what one values. I'd basically interpret a belief system that concludes, after looking realistically at things, that children are a net negative as self-centered hedonism. It's the self-centered hedonism that is the problem, not looking at things realistically. One can certainly value children in themselves while being a consequentialist atheist materialist rationalist.

What's needed is a value system that takes a longer view while accepting reality (insert diatribe about blank-slateism causing everything wrong in the world). Basically: future people matter; happier, smarter, better future people matter; and the best thing one can do with one's life is to make an infinite tree of such people by having kids. It might be that what I'm describing basically is Judeo-Christian ethics, but I think removing the supernatural takes us so far from what the original religions are about that it doesn't make sense to call it that.

What would make ChatGPT conscious?