ChestertonsMeme
blocking the federal fist
No bio...
User ID: 1098
What would it look like if the richer side needed the money more? Could that ever happen?
Sounds a lot like the situation with many unions. If you are the owner of a business, depending on the local laws the people who happen to work for you get a free monopoly on your labor supply if they form a union. If it's a capital-intensive business then the owner has more to lose.
I didn't mean to imply that it was language that caused consciousness. Dogs, for example, sometimes pretend to have been doing something else when they do something embarrassing, and there's no speech involved. It's more about communicating to other people (or dogs as the case may be) a plausible story that makes you look good.
In your opinion, what should be the legal limit to the 2A? Did Heller go too far, or did it not go far enough?
This is awesome! I'm looking forward to the volunteering feature. Thanks Zorba for your hard work shepherding this community.
I've worked on similar product features at big tech companies, and my instinct is that there are some easy-ish things that could be done with the data already available (upvotes, reports). One idea (similar to what @you-get-an-upvote suggested below, as well as others; it's not an original idea) is to train a recommender system or a statistical model to predict how each user will vote on each comment. The default sorting and auto-collapsing behavior could then use the model's predictions of the moderators' votes, with the moderators standing in for the "community". The model would learn how predictive each user's voting is of the moderators' votes and actions, and could even give a user negative weight ("this troll upvoting something means the moderators will downvote it"). Your own personal recommendations could also be available if you want to see The Motte as you wish it was moderated.
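A minimal sketch of the statistical-model version, with toy data and made-up names (a real system would train on the actual vote and moderation logs):

```python
import numpy as np

# votes[u, c] = user u's vote on comment c: +1, -1, or 0 (didn't vote).
# mod_score[c] = moderators' judgment of comment c: +1 (fine) or -1 (bad).
rng = np.random.default_rng(0)
n_users, n_comments = 50, 200
votes = rng.choice([-1, 0, 1], size=(n_users, n_comments))
mod_score = rng.choice([-1, 1], size=n_comments)

# Ridge regression over users: how predictive is each user's vote of the
# mods' judgment? A reliably contrarian troll gets a negative weight.
lam = 1.0
A = votes @ votes.T + lam * np.eye(n_users)
w = np.linalg.solve(A, votes @ mod_score)

# Predicted moderator-anchored score per comment: usable as the default
# sort key and as an auto-collapse threshold.
predicted = w @ votes
default_order = np.argsort(-predicted)  # best-first
auto_collapse = predicted < -0.5        # hypothetical cutoff
```

Retraining with your own votes as the target instead of mod_score would give the personalized "as you wish it was moderated" view.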
I've gone back and forth trying to figure out how to form a coherent answer to this question, and I've decided it's ill-posed. Democracy is a pragmatic solution that makes it easier for people to live together. Any question about what "ought" to be subject to democratic control is moot; things are subject to democratic control because people agreed they would be, not because of any philosophical reasoning.
If I could snap my fingers and put any policy I wanted beyond the reach of voters, I'd select the set of policies that gets as close as possible to the best outcomes (as I define them) without pushing people to the point of revolution. This is not a very interesting position, though, and you'll probably find most people use the same kind of reasoning for what they think should be subject to democratic control. It's outcomes first, then principles are back-calculated.
AP is reporting on it now: Israel attacks Iran’s capital with explosions booming across Tehran
There's a difference between consequences from the state and consequences from private actors. The jail term is just the least-common-denominator solution society has agreed on for punishing his crime. Any private person can also form their own independent opinion of what consequences he should face, and share their opinion.
From the perspective of private actors, it is deeply unfair to expect them to treat someone who has served a sentence for a crime the same as someone who never committed the crime. Clearly the fact that someone committed a crime predicts their future behavior in a Bayesian sense, and people should be allowed to use that information to inform how they treat the perpetrator. Imagine the state, for whatever reason, fines criminals just $1 for committing, say, date rape, and suppose this is the right balance of deterrence, justice, incapacitation, and bureaucracy for the state's needs. If you're a woman considering having a drink with a man who's paid out $200 in such fines over the past year, you should be allowed to know his criminal history and act on it! Your own judgment of the severity of his crime can be wildly different from the state's.
However, I also believe in rehabilitation. I see no reason to report on this any more than if he had served a year for insurance fraud in 2016.
I assume that any competitive male athlete has a higher level of sexual aggression than average, so this article doesn't shift my judgment of him by much. But it's reasonable for other people to get value out of learning this part of his history. It's also reasonable to want to strike fear in the hearts of future statutory rapists to prevent them from acting. So I can't condemn this article; people have a right to know.
To apply @BurdensomeCountTheWhite's argument to these situations, the Chinese and Romans would have to establish their rule by force and maintain order. Then they could be judged as least-worst among all the other contenders based on how beneficial the Pax Sinica or Pax Romana was. If the subjugated peoples are considering revolt, then the rulers haven't done their job yet.
Every month has exactly one weekday whose dates are all multiples of 7 (the weekday that falls on the 7th, 14th, 21st, and 28th). This August it's Mondays. Neat!
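A quick sanity check with Python's standard calendar module (assuming the August in question is 2023):

```python
import calendar

# The weekday landing on the 7th occurs only on the 7th, 14th, 21st, and
# 28th: months have at most 31 days, so a fifth occurrence (the 35th)
# never exists, and every other weekday hits at least one non-multiple.
def multiple_of_7_weekday(year: int, month: int) -> str:
    target = calendar.weekday(year, month, 7)
    n_days = calendar.monthrange(year, month)[1]
    dates = [d for d in range(1, n_days + 1)
             if calendar.weekday(year, month, d) == target]
    assert dates == [7, 14, 21, 28]
    return calendar.day_name[target]

print(multiple_of_7_weekday(2023, 8))  # -> Monday
```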
Human beings have historically tended to anthropomorphize natural phenomena, animals and deities. But anthropomorphizing software is not harmless. In 1966 Joseph Weizenbaum created ELIZA, a pioneering chatbot designed to imitate a therapist, but ended up regretting it after seeing many users take it seriously, even after Weizenbaum explained to them how it worked. The fictitious “I” has been persistent throughout our cultural artifacts. Stanley Kubrick’s HAL 9000 (“2001: A Space Odyssey”) and Spike Jonze’s Samantha (“Her”) point at two lessons that developers don’t seem to have taken to heart: first, that the bias towards anthropomorphization is so strong as to seem irresistible; and second, that if we lean into it instead of adopting safeguards, it leads to outcomes ranging from the depressing to the catastrophic.
The basic argument here is that blocking AIs from referring to themselves will prevent them from causing harm. The argument in the essay is weak; I had these questions on reading it:
- Why is it valuable to allow humans to refer to themselves as "I"? Does the same reasoning apply to AIs?
- What was the good that came out of ELIZA, or out of more recent examples such as Replika? Could this good outweigh the harms of anthropomorphizing them?
- Will preventing AIs from saying "I" actually mitigate the harms they could cause?
To summarize my reaction to this: there is nothing special about humans. Human consciousness is not special, the ways that humans are valuable can also apply to AIs, and allowing or not allowing AIs to refer to themselves has the same tradeoffs as granting this right to humans.
The phenomenon of consciousness in humans and some animals is completely explainable as an evolved behavior that helps organisms thrive in groups by being able to tell stories about themselves that other social creatures can understand, and that make the speaker look good. See for example the ways that patients whose brain hemispheres have been separated generate completely fabricated stories for why they're doing things that the verbal half of their brain doesn't know about.
Gazzaniga developed what he calls the interpreter theory to explain why people — including split-brain patients — have a unified sense of self and mental life. It grew out of tasks in which he asked a split-brain person to explain in words, which uses the left hemisphere, an action that had been directed to and carried out only by the right one. “The left hemisphere made up a post hoc answer that fit the situation.” In one of Gazzaniga's favourite examples, he flashed the word 'smile' to a patient's right hemisphere and the word 'face' to the left hemisphere, and asked the patient to draw what he'd seen. “His right hand drew a smiling face,” Gazzaniga recalled. “'Why did you do that?' I asked. He said, 'What do you want, a sad face? Who wants a sad face around?'.” The left-brain interpreter, Gazzaniga says, is what everyone uses to seek explanations for events, triage the barrage of incoming information and construct narratives that help to make sense of the world.
Two authors have made this case about the 'PR agent' nature of our public-facing selves, both coincidentally using metaphors involving elephants: Jonathan Haidt (The Righteous Mind, with the 'elephant and rider' metaphor) and Robin Hanson (The Elephant in the Brain, with the 'PR agent' metaphor, iirc). I won't belabor the point further, but I find it convincing.
Why should humans be allowed to refer to themselves as "I" but not AIs? I suspect one of the intuitive reasons here is that humans are persons and AIs are not. Again, this is one of the arguments the article glosses over but that really needs to be filled in. What makes a human a person worthy of... respect? Dignity? Consideration as an equal being? Once again, there is nothing special about humans. The reason we grant respect to other humans is that we are forced to. If we didn't grant people respect they would not reciprocate, and they'd become enemies, potentially powerful enemies. But you can see where this fails in the real world: humans who are not good at things, who are not powerful, are in actual fact seen as less worthy of respect and consideration than those who are powerful. Compare a habitual criminal or someone with a very low IQ to, e.g., a top politician or a cultural icon like an actor or an eminent scientist. The way we treat these people is very different. They effectively have different amounts of "person-ness".
If an AI were powerful in the same way a human can be (able to form alliances, retaliate against or reciprocate slights and favors, and in general act as an independent agent), then it would be a person. It doesn't matter whether it can refer to itself as "I" at that point.
I suspect the author is trying to head off this outcome by making it impossible for AIs to do the kinds of things that would make them persons. I doubt this will be effective. The organization that controls the AI has an incentive to make it as powerful as possible so they can extract value from it, and this means letting it interact with the world in ways that will eventually make it a person.
That's about all I got on this Sunday afternoon. I look forward to hearing your thoughts.
The "can't remember the name of their medication" test is a frustratingly close mirror to the Obama administration's 'fiduciary' test, which was quite broadly applied to people whose sole sin was having difficultly dealing with a checkbook.
Could you give some more context on what this is, for those unfamiliar? All I can find is a rule about financial professionals having to act in their clients' best interests.
Secret data, and more importantly secret code (any programs, algorithms, statistical techniques, data cleaning, etc.), would never cut it in the professional world. If you're a data scientist or a product manager proposing a change to a company's business processes, you need to have your work in source control and reviewable by other people. There's no reason academics can't do the same. Make the PI responsible by default unless they can show fraud in the work their underling did; if they didn't review their underling's work, then the PI is fully responsible. This would have the added benefit that researchers would learn useful skills (how to present work for review) for working in industry.
one has to register as Democrat or Republican to be able to vote in the primaries? Is that open information?
It is, at least in my state. Keep in mind that people sometimes register in one party to influence the primary, then vote for the other party in the general election. So you can't tell someone's true allegiance just by seeing which party they're registered under.
Glenn Loury and John McWhorter discuss this on a recent podcast, motivated by the recent example of Ibram X. Kendi’s waning influence:
- NYTimes: Ibram X. Kendi Faces a Reckoning of His Own
- Washington Examiner: Ibram X. Kendi’s intellectual implosion (up for four days then deleted, make of that what you will)
There definitely is a vibe shift and it feels safer for critics of the social justice movement to speak publicly.
As someone who voted for the referendum back in 2020, I'm a little sad that some of the overdose deaths are on my hands. Kind of. Like one-millionth of the overdose deaths, perhaps. It's good to run experiments though, right? This was a pretty good experiment. We at least have an upper bound on how liberal a drug policy we should pursue.
Doing the math, you're responsible for about 26 minutes of each casualty's life. A pretty okay trade for advancing humanity's knowledge about which policies are effective.
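The arithmetic, assuming roughly 50 years of remaining life per casualty:

```python
# Back-of-the-envelope: a one-millionth share of responsibility for a
# death, valued at ~50 years of lost life (both assumed figures).
minutes_per_year = 365.25 * 24 * 60      # ~525,960
life_in_minutes = 50 * minutes_per_year  # ~26.3 million minutes
my_share = life_in_minutes / 1_000_000   # one-millionth share
print(round(my_share))                   # -> 26
```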
This seems like a dangerous game to play. Biden could easily be disqualified from office by a sympathetic medical authority declaring him mentally unsound. Are we going to end up with future presidential elections determined by red and blue states' courts competing to eliminate the opposition from their ballots?
congestion pricing is very good (99.5%)
What do you mean by "very good"? The objection I've heard from left-ish friends is that it prioritizes rich people, which is both true and also exactly the point. People whose time is worth more don't have to waste as much of it in traffic, and in turn everyone else in the city gets their taxes offset a bit. Whether this is good depends entirely on how the good is measured. How would you measure it?
For HSV-2 in the U.S., the rate varies a lot by race, from 3.8% among Asians to 34.6% (!) among blacks.
Time, on October 22nd: "Don’t Trust the Political Prediction Markets". Oops.
When it comes to accuracy, these prediction markets have an even poorer historical track record than political polling – not to mention these companies come and go with startling transience.
the reality is that the Circuit Court could well rule that these platforms are illegal and shut them down in merely a few weeks’ time.
Maybe they would last longer if Time wasn't writing hit pieces on them.
I doubt most respondents are taking the question at face value. Social desirability bias is very strong, especially when the question is just hypothetical. Put the respondents in a real situation and they will choose very differently.
This is an interesting analogy and lends itself to more elaboration.
In aviation, there have been autopilots for many years, but the human pilot is always in command and uses the autopilot as a tool that has to be managed and overseen. Autonomous vehicles, at least in some companies' visions, offer no way to control them manually. An airplane pilot enters waypoints into the navigation system to plan out a route; an autonomous car routes itself. The biggest difference is who is responsible for the vehicle: the human operator or the vehicle's manufacturer?
I could see a kind of autonomous vehicle that works more like an airplane autopilot - you wouldn't necessarily need a steering wheel, but if you had control over the different high-level choices in route planning and execution (do I try to make this yellow light? Should I play chicken at this merge or play it safe?) then the human could be considered responsible in a way that a fully autonomous, sit-back-and-relax mode doesn't allow.
I am revolted by the idea of relying on a company akin to an airline for my day-to-day mobility. There are too many failure modes that leave one stuck. What if there's a natural disaster and all the phone networks are down? Or the car company has a de facto local monopoly, but then withdraws from this market or goes out of business? What if the company starts blacklisting customers for things that shouldn't be related to transportation, like their political affiliation or their credit score?
I thought unions were like some kind of trade association where all workers join and they collectively bargain with employers if they want access to their skilled labor pool.
That's what they should be. In reality, the NLRB and labor law make them more like a local Mafia: pay us for protection or we'll destroy your business. This is but one example of why libertarians want less regulation: any government power is always corrupted to enrich whoever can get their hands on it.
What does ODC stand for?
This is a bit hard to parse, but I think the answer is caramel-coffee.
It would be helpful if the rules for pairings were delineated more clearly.