
you-get-an-upvote

Hyperbole is bad

1 follower   follows 1 user  
joined 2022 September 04 19:14:33 UTC
Verified Email


User ID: 92


RE: dwarves, I didn't read that as a forced "woman smart, man dumb" moment. Durin isn't dumb, he's just peeved that his friend hasn't visited him in decades and he's clearly quite open to forgiving him.

I particularly didn't view it as a woke scene, because the wife is employing the soft power women have traditionally wielded in the home, while the "hard" power (i.e. the ultimate decision on whether Elrond should stay or leave) is Durin's. And in any case, it's more like Durin and his wife are playing out their roles so that Durin can both forgive his friend and keep his pride -- the man's clearly open to forgiving Elrond (note how Durin allows Elrond into his home, with predictable results).

The advantage of a web app on mobile is that it's easier to access (tapping an app icon is much easier than navigating to a URL, or opening a browser and navigating to a bookmark).

The advantage on a computer is that I can cycle between 5 applications instead of 50 tabs (though some day I hope to burn the "cmd+shift+a" tab-searching hotkey into my mind).

To me, an obvious similarity between EA and wokeism is that they both function as substitutes for religion, giving structure and meaning to individuals who might otherwise find themselves floating in the nihilistic void. Sacrifice yourself for LGBT, sacrifice yourself for Jesus, sacrifice yourself for malaria nets - it's all the same story at the end of the day

This dilutes religion into "a system that asks you to be altruistic". Is virtue ethics a religion because it asks you to sacrifice for virtue (e.g. you're not allowed to cheat on your SAT)? If you want to criticize EA for suckering people into being selfless, you've gotta extend that umbrella quite a bit! Unfortunately "EA and wokeism are really similar" looks a lot less profound when you say "and so is Kantism and Christianity and..."

I think you'd argue that EA/Wokeism demand a level of selflessness that makes them outliers, but this isn't really true -- Jesus literally asks you to give up all your belongings. You might say the difference is that EA/woke people actually follow that directive to a level that is unhealthy... but then, there are plenty of other people who do the same -- where's your critique of the nuns who spend their entire lives serving the church? Or the Buddhist monks who live only off whatever meager food is donated to them? Both are practicing their own kind of virtue at a heavy personal cost.

My main problem with Wokeism is that it really struggles to answer whether it actually delivers what it promises. A Buddhist monk, a nun, and an EA (as far as I know) have a good sense of what they're getting into and what they'll get out of it. In contrast, the effectiveness of woke policies at actually improving the wellbeing of the disadvantaged (what its adherents actually want) runs the entire gamut from effective to counterproductive, while cultivating a culture that has no qualms about deliberately misrepresenting the empirics.

(And yes, "AI safety" arguably runs into similar problems, but (1) people in EA are very aware of this and (2) most EA is not AI safety (note that you specifically critique malaria nets, which are very transparent about "what you get")).

I honestly see where you're coming from, but I personally am willing to carve out some space for supernatural beliefs. I may think the nun is wrong about going to heaven, but I think the Catholic leadership is just as wrong, so I'm satisfied that at least the intentions of all involved are honest. And insofar as non-supernatural aspects of the "deal" are concerned, "hey, want to meditate 10 hours a day and eat beans? You'll feel fulfilled" seems fairly truthful.

In contrast, when "hey, I found this empirical evidence that reparations probably wouldn't help black people after one or two generations" is met with "thanks for sharing this with me, but it's racist to post this publicly", that nakedly betrays the trust of people who are trying to be altruistic.

I pointed this out years ago: there are basically two natural philosophies when it comes to avoiding bias: either it's sufficient to (e.g.) mask race as an explicit input, or it's necessary to ensure the predictions treat all races identically. If you're against both forms but keep talking about how a system isn't "fair", you need to take a long look at what, exactly, you mean by "fairness".

Also this goes well beyond machine learning -- policing, insurance, etc. A lot of political issues aren't even at the point where productive discussion is possible, because nobody is willing to state unequivocally what they mean by "fair".

I'm not sure I really believe "self censoring" is a coherent concept. For example, if you're predicting the probability somebody will default on a loan and white people have lower credit scores than Asian people, is any model that notices this, and so assigns Asians a lower default probability on average, failing to "self censor"? If 99% of people who eat sushi default on their loans, is it "unfair" to penalize them?

To be frank, I've never heard a definition of "ignores race" that isn't implicitly asking for nakedly pretending all races have equal odds of defaulting on loans, regardless of whether that's accurate. I think people who are actually asking for this "post-hoc fairness" should be explicit about that. Instead, when I say "so you're saying we should just retroactively make the model's outputs less accurate and pretend men and women are equally likely to commit murder", the response I get is "you're just straw-manning me, I just want a model that's fair".

It's frustrating that the complaints about bias seem contradictory and/or unsatisfiable, and the people making them are unwilling or unable to elaborate. If somebody is going to criticize my model, they should give me a well-defined notion of fairness that it's actually possible for my model to meet. If their definition of fairness means "deliberately cripple your model and force banks to give out loans that are unprofitable" they need to actually own that instead of hiding behind ambiguity.

Edit: The "post hoc" solution (which I think is the only solution that meaningfully satisfies progressive demands) is:

  1. train a model that uses every variable

  2. train a model that only uses variables you want to ignore (race, sex, etc.)

  3. your predictions are model1(x) - model2(x)

Apparently actual progressives disagree, but I've never heard anyone give an alternative.
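For concreteness, here is a minimal sketch of the three-step scheme above on synthetic data (all variable names and coefficients are invented for illustration); the point is that subtracting the protected-variables-only model removes the average gap between groups:

```python
# Sketch of the "post hoc" scheme: model1 uses everything, model2 uses only
# the protected variables, and the final prediction is model1(x) - model2(x).
# The data below is synthetic; names and coefficients are made up.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
is_group_a = rng.integers(0, 2, n).astype(float)            # protected attribute
credit_score = rng.normal(0.0, 1.0, n) + 0.5 * is_group_a   # correlated with group
default_risk = 0.8 * credit_score + 0.3 * is_group_a + rng.normal(0.0, 0.1, n)

def fit_predict(X, y):
    """Least-squares fit with an intercept, returning in-sample predictions."""
    X1 = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return X1 @ coef

pred_full = fit_predict(np.column_stack([credit_score, is_group_a]), default_risk)  # step 1
pred_protected = fit_predict(is_group_a.reshape(-1, 1), default_risk)               # step 2
pred_posthoc = pred_full - pred_protected                                           # step 3

def group_gap(pred):
    """Average prediction for group A minus average prediction for group B."""
    return pred[is_group_a == 1].mean() - pred[is_group_a == 0].mean()

# The full model reproduces the between-group gap; the post-hoc model's gap is ~0.
print(f"gap (full model): {group_gap(pred_full):.2f}; gap (post hoc): {group_gap(pred_posthoc):.2e}")
```

Note this equalizes group averages, not individual predictions, which is exactly the "post-hoc fairness" behavior described above.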

The definitions you assign things should not change your understanding of the universe (i.e. the map is not the territory). "Intelligence is ..." should not underpin one's thinking in the same way that deciding whether electrons are negatively or positively charged does not meaningfully change our understanding of physics.

I have come across two fairly distinct methods of teaching programming. I would classify them as 'trying to impart intuition' vs. 'trying to impart knowledge.'

The former consists of teaching via gamified methods where students are made to play elaborate games consisting of programming puzzles, modify existing code to draw out 2-d shapes and animations, etc. Once students develop some familiarity with changing words in a file to make the computer do something, they are introduced to data types, data structures, control flow, etc.

The latter is a more 'rigorous' approach where students are first taught the fundamentals such as data types, structures, flow, interpreter vs. compiler, etc.; then they are made to write programs. These programs are sometimes gamified, but not to the same extent as the former.

I consider the latter "imparting knowledge" method superior. It's more in line with all the hard sciences I have been taught and all the good programmers I am aware of claim to have been taught using this method. More on this later.

I want to push back on your last point.

I went to a university that was top 20 on this list and was one of the strongest programmers of my year and am currently working at a FAANG. My introduction to programming was in high school and was mostly self-taught -- I took two programming courses in high school but they were primarily HTML-oriented. Before college, I was never explicitly taught about if-statements or for-loops (though I had been using them for ~3 years) and didn't know asymptotic notation, heaps, hash tables, etc.

Then I got to college and voraciously consumed all the "formal" CS education. So to me, my education was primarily the former method and "I turned out fine". (Though I certainly would have appreciated getting a formal CS education a year or two earlier, and certainly rigorous education is necessary at some point).

The main caveat is that "move this frog around the screen to eat bugs" is kind of a pathetic attempt to gamify education and I would have hated it. Working on my own projects and actually achieving them made programming seem basically magical, and (imo) that's what made me super excited to learn the formal stuff when I got to college.

I don't think "more" or "less" propositions like that really answer this question.

If you believe there is no such thing as lucky people

A world where some people are luckier than others would be like this world but with more wealth disparity

If you believe there is no such thing as luck

A world where some people are luckier than others would be like this world but with less wealth disparity

This tells you nothing about what predictions these models are making that are coming (or failing to come) true.

Could somebody please convince me that "80% ± 20% chance" is a coherent thought?

If I say "there's a 10% ± 5% chance AAPL will increase by $100 today" and you say "there's a 10% ± 30% chance AAPL will increase by $100 today" and then it happens, who do you trust more?
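For what it's worth, one common formalization (offered here as a sketch, not an endorsement) treats "p% ± s%" as a distribution over the underlying frequency, e.g. a moment-matched Beta; the ± then governs how much one observation should move you:

```python
# Sketch: interpret "p% ± s%" as a Beta distribution over the true frequency,
# moment-matched to mean p and standard deviation s. (Illustrative only; note
# "10% ± 30%" exceeds the largest std a mean-0.10 Beta can represent, ~0.30,
# so we use ±25% for the wide forecaster.)

def beta_params(mean, std):
    """Moment-match Beta(a, b) to the given mean and standard deviation."""
    k = mean * (1 - mean) / std**2 - 1   # equals a + b
    assert k > 0, "std too large for this mean"
    return mean * k, (1 - mean) * k

def posterior_mean_after_success(a, b):
    """Mean of the Beta posterior after observing one success."""
    return (a + 1) / (a + b + 1)

a_narrow, b_narrow = beta_params(0.10, 0.05)   # the confident forecaster
a_wide, b_wide = beta_params(0.10, 0.25)       # the uncertain forecaster

# The narrow forecaster barely moves after the event; the wide one swings hard.
print(round(posterior_mean_after_success(a_narrow, b_narrow), 3))  # → 0.125
print(round(posterior_mean_after_success(a_wide, b_wide), 3))      # → 0.725
```

Under this reading the ± isn't about today's outcome at all; it's about how quickly future evidence should revise the headline number.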

"Inting" is "intentionally feeding" (i.e. intentionally dying to the enemy team to give them gold). Though I've also heard "inting" used to refer to intentional game-throwing of any kind.

Also fwiw "opening" is when you stop defending and let the enemy invade your base uncontested ("open the gates", "leave your lane open", etc).

In a more sane universe, match making would take into account your past behavior and everyone would "get the teammates they deserve".

For what it's worth, I remember learning about this when taking AP Government -- that the ability of interest groups to coordinate people results in policies that benefit the few while diffusing the costs onto the many.

Farmers care a lot more about getting farmer subsidies than you or I care about paying for our share of them, because the difference in income for the farmers is orders of magnitude larger than the difference in income to you or me. Of course, having lots of nakedly rent-seeking interest groups is awful, and one of the values of economics as a field is in providing reasonable criteria for when a policy deserves to be implemented (solving negative externalities, reducing deadweight loss, etc.).

I'm not sure your notion of "default" is really relevant. Either not getting vaccinated causes a negative externality or getting vaccinated causes a positive externality.

The question of whether you want to internalize that by paying people to be vaccinated or levying fines on people for not being vaccinated is, theoretically at least, a minor implementation detail.

Claiming that the government insisting on vaccinations causes negative externalities is a bit weird since the government is typically seen as optimizing for social welfare, and so should already be accounting for the benefits and costs to citizens.

You can argue that a vaccine mandate is bad policy (in which case by all means argue that), but using the word "externality" doesn't absolve you of the requirement to actually argue that.

Your "maximized for emotional salience" Christianity also explicitly asks people to give up all their worldly possessions and follow Christ, an instruction virtually zero Christians actually follow.

And while it's true Christianity spread better than EA, it seems really unfair to compare one arbitrary idea to the most popular religion after it succeeded. Would you have predicted Christianity would become so ridiculously popular in 50 AD?

I think a fairer comparison is looking at the top 1% of sects founded ~2000 years ago, and I suspect EA performs ridiculously well by this metric.

You can't delegate morality to mathematics. All it leads to is... the 'freedom' to always be able to justify your own behaviour to yourself, even when your actions are those of a base criminal.

You can't delegate morality to non mathematics either for the same reasons, whether it's telling an axe murderer where your friend is, the numerous contradictory things Christians are taught, or reporting your own family to Mao to preserve Communism.

Proponents of every moral system can justify plenty of terrible things. This has nothing to do with mathematics, which constrains the types of arguments you can make. That's not to say mathematics solves the problem, but it's really hard to believe it exacerbates it. Jesus literally says things like

Go sell your possessions; give everything to the poor. All your wealth will then be in heaven. Then come follow me.

It is easier for a camel to pass through the eye of a needle, than for a rich man to enter into the kingdom of heaven

And virtually every Christian finds it easy to ignore Him (He's not really being ambiguous here...).

People justifying whatever is convenient isn't a problem with a particular moral system, it's a problem with people.

@vpn isn't complaining that iPhones are worse, he's complaining that they're treated as status symbols (e.g. my brother's gf suggested I'd have better luck dating if I bought an iPhone) which doesn't make a whole lot of sense from a "costly signal" perspective.

Leave the rest of the internet at the door

Somebody, somewhere, resorting to dishonest rhetoric is not a license to retaliate here.

Maybe if women feel rejected by their own tribe, or reject it themselves, they attempt to undermine it in the hopes of getting conquered by a different tribe? That seems overly complicated though; the answer to this should feel simple because it's emotional. Help me out here.

If you actually believe that, then "proactively provide evidence in proportion to how partisan and inflammatory your claim might be". Otherwise, consider being kind and charitable rather than tarring an entire group.

We have social norms and rules against saying shitty things about groups because there are costs to saying them. I like diverse opinions and I like evidence-based arguments, and these are ostensibly the goal of this site. People going on bitter tirades against women and leftists and PMCs goes against that goal.

If you want to say shitty things about people you should bring evidence. If you have no evidence then don't say shitty things.

It's true that this makes it more difficult to levy true accusations. But the point of these norms is to craft asymmetric weapons to guide us to the truth, and part of that is having standards before you're allowed to say that black people are inherently violent or that Jews secretly want to sacrifice Christian girls to the devil or that women secretly want to be conquered by the Turks.

I don't have a problem with reporting other people's opinions. It's saying "it's okay for me to portray my side in the most slanted terms possible here because I saw my opponents do it on Twitter" that I object to.

I have nothing against talking about other people. I'm against using their behavior elsewhere as an excuse to be rhetorically dishonest here.

FWIW, across /r/slatestarcodex, /r/TheMotte, and /r/theschism

  1. The top 20 posters authored 14% of comments.

  2. The top 100 posters authored 35% of comments.

  3. The top 200 posters authored 48% of comments.

  4. The top 1000 posters authored 79% of comments.

  5. The top 2000 posters authored 89% of comments.

Source

Obviously this says nothing about the quality of the content.
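If anyone wants to reproduce numbers like these, the computation is just a cumulative sum over per-author comment counts; a minimal sketch (the counts below are invented for illustration):

```python
# Sketch of computing "top-k posters authored X% of comments" from a
# per-author comment count. The counts here are made up, not the real data.
from itertools import accumulate

comment_counts = sorted([500, 300, 200, 100, 50, 25, 10, 5], reverse=True)
total = sum(comment_counts)
cumulative = list(accumulate(comment_counts))

def top_share(k):
    """Fraction of all comments written by the k most prolific posters."""
    return cumulative[min(k, len(cumulative)) - 1] / total

print(f"top 2 posters authored {top_share(2):.0%} of comments")  # → 67%
```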

CNBC is reporting that

Although the final official figures have not been released, it is estimated that the Qatar World Cup will cost around 220 billion to 300 billion dollars. This will also make it the most expensive World Cup ever.

Notably

At this year’s World Cup in Qatar, it’s noteworthy that seven of the eight stadiums have been constructed recently. Only one was renovated. Lusail Stadium alone, located north of Doha, cost $45 billion. Additionally, about 20,000 new hotel rooms have been constructed, as well as new driveways.

The USA's GDP is $20 trillion, so a cost of $220 billion is about 1% of the USA's GDP, which should immediately raise some red flags. Now that something smells fishy, we can look up Qatar's GDP and realize that CNBC is claiming that Qatar has spent at least 120% of their GDP on the World Cup.

Unsurprisingly, we can count on Reuters to set the record straight.

Gas-rich Qatar, in an attempt to emulate the dramatic transformation of Gulf rivals Dubai and Abu Dhabi, has spent at least $229 billion on infrastructure in the 11 years since winning the bid to host the World Cup.

Some Googling also gets us news.sky.com being more explicit

Qatar maintains that, while much of the infrastructure included in the $200bn figure will be used during the tournament, its construction would have taken place regardless of whether the cup was being held there, so it should not be viewed as the total cost.

I think it's fair to say that reporting all infrastructure spending in the last decade as "spending on the World Cup infrastructure" is pants-on-fire misleading, particularly when comparing it to the costs other countries paid.

The New York Times does it with a bit more plausible deniability.

For the country of three million people, the monthlong tournament is the culmination of 12 years of preparation and more than $200 billion in infrastructure spending, subsumed into a grand nation-building project for a state the size of Connecticut surrounded by more powerful neighbors.

Although their tweet is more suspect

Qatar opened its long-awaited World Cup with a 2-0 loss to Ecuador on Sunday, a disappointing start to an event that had required more than a decade of planning; $200 billion in investments; and countless uncomfortable questions about human rights.

But what about that $45 billion stadium (which, incidentally, is greater than Qatar's annual government spending)?

That also seems implausible. No source is cited but, for comparison, Yankee Stadium seats roughly 47,000 people (vs. Lusail's roughly 89,000) and cost $2.3 billion (or 5% of the alleged cost of Lusail). Also sportingnews.com, while also guilty of running that $220 billion number, helpfully lists the costs of each of the new stadiums and claims Lusail Stadium cost $767 million.

Rough order-of-magnitude verification of numbers is a valuable skill. If I tell you the deepest part of the ocean is 500 miles deep, you should really be able to know that I'm wrong (the USA is only ~2,500 miles wide). You should know whether the government spends $1 billion or $50 million on something. Reporting that a country spent $200 billion on a sporting event should instantly raise a red flag.
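The Qatar check takes one line per comparison; a sketch (the Qatar GDP figure is my rough assumption, ~$180 billion around the time of the tournament):

```python
# Order-of-magnitude sanity check of the reported World Cup cost.
usa_gdp = 20e12       # ~$20 trillion, the figure used above
qatar_gdp = 180e9     # rough assumption (~$180 billion)
claimed_cost = 220e9  # low end of CNBC's quoted range

us_share = claimed_cost / usa_gdp
qatar_share = claimed_cost / qatar_gdp
print(f"{us_share:.1%} of US GDP, {qatar_share:.0%} of Qatar's GDP")  # → 1.1%, 122%
```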

Go play Wits & Wagers.