fmaa

0 followers   follows 0 users
joined 2022 September 17 17:51:56 UTC
User ID: 1241


Not only is such a calculation possible, it is unavoidable. Most moral goods have a material cost, and basically no human picks the maximally moral side of that tradeoff. Though it is the relative, not absolute, cost that matters for people, which is why the world is getting so much better. The marginal cost of saving a human life anywhere on Earth is about $5k these days, which is orders of magnitude higher than when people started tracking things like this, as modern abundance has allowed people to actually work at saving lives on a massive scale, picking all the low-hanging fruit.

And in general your framing of the examples is exactly backwards. Increased wealth is what allows for the luxury of moral good. For example, the cost of abolishing slavery (including serfdom) was evidently too high for a pre-industrial society.

Not who you asked, but the sails are good as a skyline piece while the bottom layers of the SOH are a boring block of concrete, so the whole thing looks better from a distance than up close. And looking at the works of Hadid, she reasonably exemplifies the trend of inhuman boxes and blobs of weird shapes which look interesting for 5 seconds but are in no way awe-inspiring, pleasant to be around, or part of a beautiful skyline.

Really, this is the essence of the issue for me - a lot of architecture from the previous century feels designed to impress people looking at it for 5s-5min. Wow, what an unusual design. But buildings are going to be the home or workplace of many people, and part of many more people's daily commute or view. I'd pick any place from the front page of that subreddit over any of Hadid's buildings for this.

The last argument is one for corporal punishment over prisons, not for mob justice.

And the big issue with mob justice isn't that thieves get beaten up, it's that sometimes the person getting beaten up didn't actually do anything except be an outsider and look funny. Or more generally, that the less formal the mechanisms of justice, the more they become about social standing. India keeps popping up in international news about various gangrapes and coverups thereof because the rapists are friendly with/members of the police, which is enabled by the same mechanisms that enable your beatings.

They would laugh at him for getting caught, and not taking the punishment stoically enough, and then go right back to being his friends and bullying the previous victim/the snitch. Friends laughing at each other does not make them lower status amongst themselves.

I think changing the setting has to be the start. That some teens are abusive dicks is one thing. That you (and I) felt obligated to go back every day to the place where you were regularly abused, to sit in forced confinement with people you hate, is insane.

This is a terrible solution. A punishment can't really be embarrassing unless the one doing the punishing is higher status, and I don't think bullies generally respect teachers. A teacher spanking a bully wouldn't lead to him being bullied by his former friends, it would lead to him and his friends beating up the previous victim for snitching to outsider authority.

https://ourworldindata.org/ethnographic-and-archaeological-evidence-on-violent-deaths Proportionally, fewer people die from violence than in any previous century. Or in early childhood, thanks to modern medicine. Or from starvation, thanks to industrial fertilizers. As for happiness, Ted might have had a better point if he went for the invention of agriculture. But pre-industrial agricultural society meant that the vast majority of humanity were subsistence farmers subject to frequent violence.

The linked essay is so extremely shoddy that I'm not sure who you imagine would be swayed by it, at least in your direction. And the last paragraph just reminds me of Asimov's relativity of wrong.

LED lights should still go in the 'against' column, as their significant advantages could have easily led to adoption even without regulation. And unlike your positive examples, the primary cost of incandescents wasn't external.

Yes, psychedelics are consistent with the soul-radio model. Dissociatives and deliriants seem a lot more like the sort of brain damage that's evidence against it. The different consciousness part was mostly a joke.

I was going off what I remembered of Scott's review. Rereading it now, my memory of it was wrong, but it seems not very relevant to this conversation. Quoting the review,

I think he is unaware of (or avoiding) what we would call “the hard problem of consciousness”, and focusing on consciousness entirely as a sort of “global workspace” where many parts of the mind come together and have access to one another. In his theory, that didn’t happen – the mental processing happened and announced itself to the human listener as a divine voice, without the human being aware of the intermediate steps. I can see how “consciousness” is one possible term for this area, if you didn’t mind confusing a lot of people. But seriously, just say “theory of mind”.

But this thread is entirely about the hard problem.

If messing around with the radio makes it output an entirely different program, one would suspect that it was actually generating, not receiving a signal. (Or changed which signal it receives. Brain damage tunes your body to a different soul/consciousness is an option.)

As for the popular view of consciousness as mostly providing a narrative/excuses for subconscious processes (of which Jaynes's feels like a variation, where the narrative historically wasn't conceptualized as "I" and didn't have to have a single narrator), I feel like that would only more strongly suggest that it is inherently embodied.

The strongest evidence is probably the way in which various forms of brain damage change aspects of personality, in a manner that would be very odd under a soul-radio model of the brain.

Evidence that it happens in the brain doesn't really make it much less mysterious though.

I think the atheist/agnostic distinction (outside of models where an explicitly unknowable god has meaning) is special pleading. Denials of the existence of god, unicorns, Santa, Russell's teapot, and Sagan's dragon are all in the same category of statement. But only one of them is so frequently met with "well, technically, you're only saying they probably don't exist".

With community servers, there were plenty of cheating horror stories of a different kind - people excusing/turning a blind eye to the cheating of a popular member of the community, or people turning to cheats to keep up when they care more about the community than the game.

Though I guess they were still rare enough to be stories, instead of business as usual.

I'd like to be able to say, for instance, "computer, match me with someone who is about equal to my baseline skill level," or "computer, match me with someone who will push me to my limits," or "computer, I don't really care who I match with today."

This can be achieved by having multiple accounts and switching based on your level of intensity. It is often frowned upon because it can be easily abused, of course. The computer can't really tell whether you're honest, or just asking for the first one while actually planning to go all out and stomp people worse than you.

Even with the assistant models, you can probably do a lot of obfuscation by bothering to add instructions to write your screeds "as written by an X" or "in the style of author Y" etc.

My point is, if you prefer that level of challenge, but will pull out all the stops if challenged more, you're explicitly saying that you'd prefer to not be matched against yourself, but someone worse. The way to get that level of challenge more consistently then is to not tryhard when challenged more, but take the loss and drop in ranking.

The beauty of going off pure Elo is that it doesn't care why you're performing at your level. Whether you're talented but goofing around, or terrible but trying your best, all it does is match you against someone performing at the level you've recently been performing at.
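For concreteness, here's a minimal sketch of a standard Elo-style update (the 400-point scale and the K-factor of 32 are just conventional choices, and the function names are mine, not from any particular game's implementation):

```python
def elo_expected(rating_a: float, rating_b: float) -> float:
    """Probability that player A beats player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def elo_update(rating_a: float, rating_b: float, score_a: float, k: float = 32.0):
    """Return both players' new ratings; score_a is 1 for an A win, 0.5 for a draw, 0 for a loss."""
    expected_a = elo_expected(rating_a, rating_b)
    new_a = rating_a + k * (score_a - expected_a)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - expected_a))
    return new_a, new_b

# A 1500-rated player who keeps goofing around simply drifts down until the
# matchmaker pairs them with opponents at the level they actually play at.
print(elo_update(1500, 1500, 0))  # -> (1484.0, 1516.0)
```

The update only ever sees results, never effort, which is exactly why it ends up matching you against whatever level you've recently been performing at.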

Yeah. But we can surely design algorithms that consider Elo but also consider, I guess, the fact that Elo doesn't capture all the factors that might go into the outcome. In some games, weird random factors can impact who wins, or certain particularly cheesy strategies work really well unless you specifically counter them.

Elo is that algorithm. Trueskill, if you want to add in the factor of uncertainty. As a metric, it doesn't matter if your rank jumps around ±100 due to random factors; they'll even out in the long run, and the rating isn't a precise enough measure for those jumps to matter much. Any consistent change will still only come with an actual change in player skill. It is only as a target that these random jumps are a problem, when they leave people 200 below where they feel they 'deserve' to be (which is nearly always their peak, though of course the peak is also a random jump ahead).

Cheesy strategies are a separate design issue, a match can be perfectly balanced 50/50 but be a boring blowout either way depending on random rock-paper-scissors.

The problem, insofar as there is one, is that Elo is a metric, not a target. And like all metrics measuring things people want, it immediately gets treated like a target. Which doesn't really make it lose its value as a metric because it's very hard to fake, but it does make people miserable.

Past the first handful of games in a Trueskill style system (where your rating has both an expected value and a variance, and when the system has little knowledge of your skill the variance is huge), the yo-yo effect isn't very real. If the matchmaker is actually matching people close in ranking (a big issue in many games is prioritizing queue times over match quality), it should take extremely improbable streaks to get matched significantly outside your skill range. Realistically, the gap between you at your peak and you on an off day is much bigger than random Elo fluctuations. It's just that when you're treating ranking as a target, not a metric, any random upswings feel like long deserved gains and random downswings feel like the matchmaker is out to get you.
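A toy sketch of that mean-plus-variance idea (this is not the actual TrueSkill math, which uses Gaussian factor graphs; the step sizes and function names here are made up purely to illustrate why the early yo-yo fades):

```python
import math
from dataclasses import dataclass

@dataclass
class Rating:
    mu: float = 25.0         # current skill estimate
    sigma: float = 25.0 / 3  # uncertainty about that estimate

def match_quality(a: Rating, b: Rating) -> float:
    """Rough 'closeness' score in (0, 1]: high when the estimates are similar
    and both are already well established (low combined uncertainty)."""
    gap = a.mu - b.mu
    spread = math.sqrt(a.sigma ** 2 + b.sigma ** 2)
    return math.exp(-(gap ** 2) / (2 * spread ** 2)) / (1.0 + spread / 25.0)

def observe_game(r: Rating, won: bool) -> Rating:
    """Shift the estimate toward the observed result and shrink the uncertainty.
    The step scales with sigma, so the first few games move the rating a lot
    and later games barely move it."""
    step = (1.0 if won else -1.0) * 0.5 * r.sigma
    return Rating(mu=r.mu + step, sigma=max(r.sigma * 0.9, 1.0))

# After a handful of games sigma has shrunk, so individual results
# no longer swing the rating wildly.
r = Rating()
for result in [True, False, True, True, False]:
    r = observe_game(r, result)
```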

I'd like to play against people whom I feel challenge me when I'm playing at my general 'best' without exerting myself to try to keep pace.

Your level of exertion can't help but be factored into your ranking. If you do ever exert yourself to keep pace, you'll be ranked higher than your general 'best' without exertion. So you're explicitly asking to be matched against people worse than you.

I think the one IP-based aspect of the RTS template that easily could have converged towards very different designs is the focus on in-map resource extraction and economy. You can't make a Dune game without harvesters, but you can easily make a strategy game without actively managed economy units and harassing thereof. And many successful RTS's from this century have abandoned this aspect.

Fully outside the context of conspiracy theories, recent history and upcoming plans on bans vs pricing in externalities haven't been promising. Or even just letting people judge internal costs themselves, in the case of incandescent lights.

And I'm not even sure to what extent the pushback comes from elites versus populists. Some brands of populist like bans because many people read 'pricing in externalities' as letting rich people keep sinning while the masses suffer, vs the fairness of everyone suffering.

I get that. I'm saying that the analogy doesn't work because their beliefs and theology are directly linked, unlike the fire and the circus in the analogy. Not relating to the specific theological trappings isn't the fundamental cause of not being taken seriously by nonbelievers. They might be if the analogy is to converting non-catholic christians (back) to catholicism, though.

The first analogy doesn't really work; the fire is entirely separate from the clown's job or attire. Whereas to a modern atheist, the christian faith (among others) is the circus, and there's no fire. Or, if the fire is some modern moral failing, any theological arguments on preventing it are like the clown hoping that, if he's just funny enough, the people will do what he says, instead of breaking character and pleading for help. Whether or not you also dress up in millennia of navel-gazing is secondary to that base disagreement.

Well, that's mostly the issue of asymmetric games pretending to be symmetric. In better designed games, the asymmetry is explicit and both the advantages and limitations of the AI are built around instead of the game pretending they don't exist.

And the second problem isn't really fundamentally caused by the asymmetry, it's caused by bad victory conditions. Plenty of strategy games become gigantic slogs by endgame when played in multiplayer as well. Which is why nearly all multiplayer matches in Starcraft or Civ end in forfeits.

I'm replying to a post that's wondering why people don't just surrender to save their lives. And the very point of those discussions on LW was that any model of rationality that is easily destroyed can't be all that rational. Which is why the last paragraph of my post gives a game-theoretic reason to fight.

But humans very much do not live forever. You are not your genes, your consciousness is their byproduct and will die with your physical body. And your desires are only indirectly linked with genetic success (adaptation executors vs fitness maximizers and all that). Plus, for genetic success it's much better to get other people to die for you instead.

Fairly sure this was not an uncommon topic in the Less Wrong of 10 years ago. How do you have a functional military in your rationalist utopia when it is always rational for the individual to flee and/or surrender? Except if everyone does that, your utopia gets conquered by the nearest group of marginally less 'rational' barbarians.

There's nothing strange or unreasonable about history being full of groups of people willing to risk their lives for the abstract concepts of their group. Because groups without such memes generally don't last long enough to leave a mark on history.

And from a game theory perspective, the credible pre-commitments of MAD are how all military defense functions, really. If you attack us, we commit to fighting a bloody war instead of rolling over. Even though the cost for the defender will be greater, the cost for the attacker will be much greater. And the only way to make that pre-commitment credible is to follow through even after the deterrence has failed. Because it is an iterated game, both from the defender's perspective, assuming they survive, and evolutionarily - "fuck with me and we'll both end up worse" is credible coming from humans because humans have evolved to follow through often enough.
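The deterrence logic can be made concrete with purely made-up payoff numbers:

```python
# Purely illustrative payoffs, (attacker, defender); the numbers are invented.
ROLL_OVER = (+10, -10)   # defender surrenders: cheap conquest
FIGHT     = (-20, -30)   # bloody war: worse for both, but a bigger swing for the attacker

# Without a pre-commitment, the defender chooses only *after* being attacked,
# and rolling over is then the lesser evil (-10 beats -30)...
defender_best_response = max([ROLL_OVER, FIGHT], key=lambda payoff: payoff[1])
assert defender_best_response == ROLL_OVER
# ...so attacking pays (+10 versus 0 for staying home) and the attack happens.

# With a credible pre-commitment to fight, the attacker expects -20 from
# attacking versus 0 from staying home, so the attack is deterred - but the
# commitment is only credible if the defender really would fight once
# deterrence has already failed.
```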