07mk

0 followers · follows 0 users · joined 2022 September 06 15:35:57 UTC
Verified Email
User ID: 868

No bio...

IANAL, but what laws would you expect to be on the books that this would violate?

If I oppose you shooting a bunch of cannon balls at my house, I'm not canceling ballistics, I'm canceling your use of ballistics in this particular way.

This isn't analogous to the actual situation in any way, though. It's more akin to you opposing anyone shooting any cannon in any direction. In which case it'd be fairly appropriate to claim that you are canceling ballistics.

The software that generates rap lyrics is a bunch of maths. More specifically, it's an algorithm that takes some set of inputs (I'm guessing some sets of strings and some random numbers?) and produces some set of outputs (in the form of a string of lyrics). It's a bunch of maths much like how the proof that the square root of 2 is irrational is a bunch of maths. In my hypothetical "what-if" scenario that we are talking about, the "cancelers" have a problem with this bunch of maths being used whatsoever, regardless of the characteristics of the individuals involved in creating or running the rap-generation software. This is accurately described as "canceling a bunch of maths," rather than canceling the social context around its usage.
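To make the "it's an algorithm mapping inputs to outputs" point concrete, here's a hypothetical sketch; the names and structure are mine, not the actual rap-lyrics software. The only claim it illustrates is the shape of the function being described: (set of strings, random seed) in, string out, with nothing depending on who runs it.

```python
import random

def generate_lyrics(phrases, seed, n_lines=4):
    """Toy 'lyrics generator': picks phrases using a seeded RNG."""
    rng = random.Random(seed)  # deterministic: same inputs, same output
    return "\n".join(rng.choice(phrases) for _ in range(n_lines))

verse = generate_lyrics(["check the rhyme", "one two", "on the mic"], seed=7)
```

A real generator would be vastly more complicated, but it would still be a function of this kind, which is what makes "canceling it wholesale" equivalent to canceling the maths rather than any particular use of it.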

Wakko's America is a good one too.

I'd say it's rather not cynical to consider that "if" to be at all possible.

Famed fitness expert Steven Crowder answered this question for us 7 years ago.

Based on my own limited experience with Disco Elysium, I'd agree that it's not particularly "woke," and whatever political or ideological messaging it had seemed to be in the form of "fictional character has this opinion" rather than "this fictional narrative serves as a lesson for why this opinion is the correct one IRL." I'll add, though, that in my personal experience, the only people who ever called the game leftist were leftists praising the game for pushing forward leftist (not necessarily "woke" or progressive) messaging. Such folks were the only reason I'd heard of the game and got interested enough in it to start playing, actually.

however conversely, when it accurately captures the experience of human despair in all its facets, I consider it secondary whether its despair is modelled by a level of a neurochemical transmitter or a 16-bit floating point number. I for one don't feel molecules.

Well, the question then becomes what is meant by "accurately captures the experience of human despair in all its facets." Since we still currently lack a true test for consciousness, we don't have a way of actually checking if "all its facets" is truly "all its facets." But perhaps that part doesn't matter; after all, we also have no way of checking if other humans are conscious or can suffer, and all we can do is guess based on their behavior and projecting ourselves onto them. If an AI responds to stimuli in a way that's indistinguishable from a human, then perhaps we ought to err on the side of caution and presume that they're conscious, much like how we treat other humans (as well as animals)?

There's another argument to be made that, because humans aren't perfectly rational creatures, we can't cleanly separate [AI that's indistinguishable from a suffering human] and [being that actually truly suffers], and the way we treat the former will inevitably influence the way we treat the latter. And as such, even if these AI weren't sentient, treating them like the mindless slaves they are would cause humans to become more callous towards the suffering of actual humans. One might say this is another version of the "video games/movies/porn makes people more aggressive IRL" argument, where the way we treat fictional avatars of humans is said to inform and influence the way we treat real humans. When dealing with AI that is literally indistinguishable from a human, I can see this argument having some legs.

I agree that it'd be a massive waste and overreach if and only if AIs are not humanlike. I hope you would also agree that it'd be an atrocity to keep as mind-controlled slaves AIs that are, in fact, humanlike. I mean, at that point you're conflating wokescolds with "not cool with you literally bringing back actual slavery".

Is the contention that a humanlike AGI would necessarily have subjective experience and/or suffering? Or perhaps that, sans a true test for consciousness, that we ought to err on the side of caution and treat it as if it does have conscious experience if it behaves in a way that appears to have conscious experience (i.e. like a human)?

Chris Hemsworth is a Hollywood star with a physique that's out of reach for almost anyone. Someone with his exact same physique but not his stardom also wouldn't turn nearly as many heads. "Look what they need to mimic just a fraction of our power" is a quotation that comes to mind.

I was in high school in 2001, and the view espoused by @AhhhTheFrench, that we should do nothing and that Bush was playing into the hands of Al Qaeda by attacking Afghanistan (even more so Iraq), was basically the mainstream consensus where I lived. It certainly wasn't universal, but it was common enough that there was social pressure to conform. I recall thinking at the time that this was just another murder, scaled up by a factor of not even 3,000, and thus only criminal proceedings were justified.

Given that my environment wasn't typical, I think you're right that most leftists still would've complained if Bush took the pacifist route, but I think there would still have been quite a bit of support. Minority support, to be sure, though.

To me, one important difference maker is that dead people have no skin in the game. Broadly, one might posit that dead people had a preference that humanity keep surviving and, as such, they could be considered to have some retroactive skin in the game, and as such their votes could be helpful for humanity continuing to survive. However, I'd contend that the actual preference could be described as genuinely believing that one's preferred ideas would lead to humanity surviving after one's death, rather than as actually wanting humanity to keep surviving after one's death. After all, there's no way to check the latter. At best, one can check the trajectory of humanity (or subset that you care about) while one dies and assume that a trajectory that looks good now will look good in the future after one is dead. That's valuable, but also limited. So I think it makes sense to at least discount people's votes based on accident of death, even if we don't automatically disqualify them. If those people's votes lead us towards hell, they're not the ones who will be suffering that hell, and so we can't trust them to weigh the risks of hell creation properly.

What I can't understand is this seemingly pervasive opinion that increasing costs in a market somehow doesn't affect equilibrium quantity. Some people go so far as to say that decreasing costs would lower equilibrium quantity, which is even more absurd.

Aren't there some goods for which lowering the price actually decreases the quantity sold, because it frees people to substitute them with more expensive, higher-quality products? However, I've also been told that this was just a hypothetical good speculated about by some economists (Giffen goods, I believe they're called) rather than something observed to exist in reality.

But either way, I think almost no one thinks to model this type of thing in terms of supply and demand. There was another comment a couple days back here about why Freddie DeBoer and/or people like him didn't find it obvious that slut shaming was a metaphorical form of unionizing by women in the metaphorical mating market, where I think the same phenomenon happens. Some topics people tend to see as almost supernatural, free from the basic laws of reality, and both crime and love fit into that category. As for why people treat these things as supernatural instead of bound by reality, I think it mostly has to do with how most people, most of the time, including people like me who write on this site, prefer to feel good than to be right. Being right takes hard work, research, skepticism, correcting self-biases, modeling, etc. But it's easy to believe that whatever my side is saying about some controversial topic is correct, and it feels so damned good to do so.

How do you distinguish vigilance from the more pedestrian infighting that comes from zero-sum status games? It’s possible to view every Bernie Bro, every college cancellation, every instance of a snake eating its own tail as the noble policing of establishment tendencies.

This is a fair point, and certainly it's possible to interpret those in that way, but my perspective is that it's usually easy to distinguish between self-vigilance and pedestrian infighting by observing how the status of oneself or one's own preferred ideology would be affected. Which is to say, if you're not pushing in the direction that leaves you more open to having your status lowered, then you're not applying that vigilance to yourself, you're applying that vigilance to someone else.

For instance, with college cancellations: when Middlebury students mobbed Charles Murray and the professor who invited him to give a guest lecture, in one of the earlier high-profile cases a lifetime ago now, were those students doing so with the belief that, through their actions, they would be challenging their own set of beliefs, i.e. most likely what we call modern social justice, CRT, idpol, "woke," etc.? Perhaps things played out that way from a certain point of view, but I would argue that it's clear their vigilance was directed at an "other," i.e. the Murrays of the world, who have beliefs about scientific inquiry regarding hereditary differences in intelligence that conflict with their own, not at "themselves," i.e. the people who believe that Murray giving a talk in some official college capacity (unrelated to The Bell Curve, IIRC - I think it was about his more recent book Coming Apart?) would cause harm.

To use a made-up example, if Ibram X Kendi came out and said that he's worried about how people buying into his lessons - and not in the "oh they're misinterpreting it and applying it wrong" kind of way - could lead to a tyrannical (perhaps not literally fascistic) society in which, say, individuals are forced to submit to others based purely on what races they belong to, and as such, he's pushing forward research to figure out these potential harms and how to mitigate them, this would appear to be vigilance towards oneself. Arguably, this would raise his status and that of his ideology, but that would be done by changing his ideology to a better one through corrective actions; the unchanged one would lose status as that older ideology that we no longer use, because we have a better, fixed one now.

On the other hand, if Kendi came out and said that he's worried about how the Democratic party isn't taking his scholarship seriously enough and, as such, they could inadvertently allow the latent white supremacy of the party to recreate Jim Crow in 21st century America or the like, that would appear to be obviously infighting between two different parts of the left. If Kendi got his way in this fictional example, the result wouldn't be that his preferred ideology gets attacked, damaged, and rebirthed into a better version of itself, it would be a peer ideology that did that, while his own just gained more status by becoming more influential in a powerful institution.

I do think there must be edge cases, and there's probably no simple binary test to check, but in most cases, it's not all that ambiguous.

There are people right now calling for policies leftists don't like and/or consider racist (restricting immigration, cutting welfare, harsher crime punishment). Some of those people (Murray) have explicitly linked these things to their take on HBD.

Sure, but policies that we don't like, or even consider racist, are different from "bigotry, racial hatred, or dehumanizing," because we had to expand the definition of "racist" in order to fit things like "restricting immigration, cutting welfare, harsher crime punishment" within it. So that's just a whole different category of things.

But, even without them, you can easily connect the dots because history didn't start yesterday and none of these arguments are new.

People keep saying this, but every time I see the dots actually connected, I notice that the threads are held there by sheer force of will rather than by any sort of actual underlying connection.

In terms of political intuitions people continue to hold without strong empirical backing...this doesn't seem that egregious.

As damning of political intuitions as this statement is, it's true. What gets me is that "not egregiously bad in a category of things known for being incredibly bad" is not the standard I want my side to live up to; in fact, I try to make it so that it's only because my side lives up to a higher standard than the other that I choose that side. One of those higher standards is one of epistemology; that the left is more correct than the right because we perceive the world more accurately than the right. Perceiving the world more accurately isn't a matter of believing more true things like "the Earth is closer to 4 billion years than 6,000 years old" but rather about the process by which we discriminate between what is true and what is false. And if we're willing to say that this bit of political intuition is a high enough bar to censor HBD, then that calls into question our epistemic standards in general, which calls into question my belief that ours is actually the better side.

I was a math major, and I don't find this suspect at all. Beyond 100-level courses, college math is primarily about logic, often applied to numbers, but also often applied to non-numbers. It would be entirely reasonable for a full semester of a class not to involve any engagement with numbers beyond the kind of basic everyday stuff, and explicit examples would be irrelevant, because those examples wouldn't involve numbers anyway. A majority of college-level math is writing essays.

However, on that note, I would say that I disagree with the notion that this would make math, or certain types of math, a verbal field. The fact that it's primarily about writing essays doesn't make it verbal, because the essays are based around rigorous rules of logic, which is what makes it a quant field, rather than a verbal field where such rules just don't matter.
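To make the "essays built on rigorous rules of logic" point concrete, here's a sketch, in generic textbook style rather than from any particular course, of the canonical proof mentioned earlier that the square root of 2 is irrational. It reads as prose, but every sentence is forced by logic rather than rhetoric, which is exactly why it's quant writing rather than verbal writing:

```latex
\begin{proof}
Suppose toward contradiction that $\sqrt{2} = p/q$ with $p, q$ coprime
integers. Squaring gives $p^2 = 2q^2$, so $p^2$ is even, and hence $p$ is
even; write $p = 2k$. Substituting, $4k^2 = 2q^2$, so $q^2 = 2k^2$ is
even, and hence $q$ is even. Then $p$ and $q$ share the factor $2$,
contradicting coprimality. Therefore $\sqrt{2}$ is irrational.
\end{proof}
```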

I'm hoping this was some sort of high water mark for ultra fast paced filmmaking and everyone will take a deep breath from now on.

Either that, or with generative AI, we're at the cusp of an era that makes the current pace and everything that came before it look sluggish.

I do think the rapid pace at which they pump these out is definitely a factor. Again, I don't understand the mentality of the execs who signed off on this kind of crazy release schedule, where we're getting like 2 or 3 live-action remakes a year, including ones dumped onto streaming and such, and a similar number of Marvel shows. Strike while the iron is hot, get while the getting's good, and all that, but did it just not occur to them that audiences have a refractory period between big releases, and that they should find the right release rhythm instead of just flooding audiences with content? And obviously the quality suffers, with CGI being a well-known issue, but also with the important stuff, like the writing.

It's like the execs thought their audiences were vending machines, where you just insert latest content and get money back out, and the more content you insert, the more money comes out, with no limits. It doesn't take a genius businessman to know that this isn't how that works, and Disney isn't just a business, it's the top of the top of the top in its industry, the metaphorical New York Yankees of Hollywood. These execs should know enough about business not to treat their audiences like that, if only out of naked selfish interest.

I think it's sarcasm. Would you have accepted a no?

It didn't look like sarcasm to me, but either way, obviously I would have accepted a no. It'd have been surprising, but I wouldn't see any reason for guesswho to lie about that, other than just trolling, I guess. Which could be what's happening with that comment, to be sure.

In terms of comparative analysis, I don't really think there's much of a way to do such a thing with anything approaching objectivity. Maybe ChatGPT, though then the question of validity arises. Again, from my own subjective analysis, the specific bad faith style looks very similar to Darwin and also quite different from the typical ways that a typical person with SJW leanings would make bad faith arguments. I suppose the only way to truly confirm would be to ask Darwin2500 on Reddit and if he says Yes, that'd be beyond a reasonable doubt at that point in my view. If he said No, then clearly there'd be some lying going on somewhere, and I'd probably lean towards it being different people, since Darwin2500, for all his bad faith, tried to stay away from blatant lies in that style and also his trolling tended to be content- and argument-based.

Edit: Also, he did refer to me as if he knew me from Reddit, when he accused me of attacking his character on Reddit. Again, this could be a lie, especially since I don't remember attacking his character on Reddit, but his belief about what constitutes an attack on character seems to be different from mine.

I just wish we didn't have to live through this happening in real-time with non-rabid non-reactionaries. It's one of those things that's fun to think about, but hellish to experience.

Hm, good point. I suppose my thinking was that it's higher status to be a husband of a harem than to be a playboy or serial monogamist, but in the brave new world, that's certainly not a safe assumption.

Probably the most prominent example of this recently has been the phrase "from the river to the sea." Some people surely use it with a genocidal intent (there should be no Jews between the "river and the sea") while others use it as an expression of solidarity between the West Bank, Gaza, and non-Jews in Israel more generally.

I must admit I'm pretty ignorant about this phrase and why it's considered genocidal. Getting rid of Israel as a nation and even kicking out all of the Jews from there isn't genocidal, just ethnic cleansing, right? Is the issue that that was the Nazis' initial plan before they got to the Final one, and as such we can round one up to the other? That seems like the slippery slope fallacy (though I'll admit that there is indication that the people descending down the slope are doing so by pouring oil on it rather than by carefully inching down by building steps or something).

But I'd also say that, if it's the case that the phrase is genocidal in nature, then it doesn't really matter if the person saying the slogan is thinking to themselves, "I'm saying this because I really want those Jews murdered" or "I'm saying this because I want to show solidarity between XYZ and literally not an inch more;" the latter is still showing full-throated support for genocide, and their ignorance of what the phrase that they chant means just adds on to their ethical failure, and certainly doesn't mitigate it. I'm just not sure how the phrase could be genocidal in nature.

The answer is 10,000-20,000 sq miles (for electricity in the US), which is the size of Lake Erie. For all the energy in the entire world, it's going to be a small fraction of the Sahara.

That figure, I think, comes from this report that was linked in your link. Having skimmed it, I can't tell the exact methodology they use, but it seems they take some sort of weighted average of the area per unit of energy per year across the various solar plants in the US and use some basic modeling to extrapolate upward. I see no mention of transmission losses, or of attempting to model the geographical and grid locations where the next solar plants would actually be built in order to reach a level of power production that actually does power all US households.
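For what it's worth, a crude back-of-envelope with my own assumed inputs (none of these numbers are from the report) lands in the same ballpark, so the figure is at least arithmetically plausible even before the siting questions:

```python
# Back-of-envelope sanity check of the "10,000-20,000 sq mi" figure.
# All inputs are rough assumptions of mine, not numbers from the report.
ANNUAL_KWH = 4.0e12          # total US electricity consumption per year
CAPACITY_FACTOR = 0.25       # fraction of nameplate output actually delivered
ACRES_PER_MW = 7.0           # land per MW of capacity, panels plus spacing
ACRES_PER_SQ_MILE = 640

avg_demand_mw = ANNUAL_KWH / (365 * 24) / 1000   # kWh/yr -> average MW drawn
capacity_mw = avg_demand_mw / CAPACITY_FACTOR    # nameplate MW needed
sq_miles = capacity_mw * ACRES_PER_MW / ACRES_PER_SQ_MILE
print(f"{sq_miles:,.0f} sq mi")                  # lands near the top of the range
```

Of course, this treats every acre as interchangeable, which is exactly the assumption I'm questioning below.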

Locations aren't fungible for power plants, due in part to transmission losses (among other factors), and this is even more so for solar plants, because location directly affects both how much sunlight is received and the local weather. As such, any meaningful estimate of the solar plant area we'd need to power all US households is necessarily going to involve modeling specific plants in specific locations to match the load being drawn at various locations, without negatively affecting the grid with congestion and such. Whatever modeling is being done in that paper doesn't seem to come anywhere near that.

Solar and nuclear together, perhaps; those two techs could be all we need for our energy needs, though I'd say nuclear would be holding up 99% of the burden in that case.

That's before getting into the massive build-up of batteries (and other energy storage) that would be needed to account for solar's intermittent uptime if we were to go 100% solar.

Fair enough in terms of happiness. I think that just shows that, for many people, including some subset of women, happiness just isn't all that important a thing to aim for in a relationship, and they would prefer to be in a relationship that causes them less happiness and more misery than one that causes greater happiness and less misery, since there are other factors in the less happy and more miserable relationship that make it overall more desirable. I'd agree that if SkookumTree thinks that women would be happier being in a relationship with an active wife-beater than with him with all his awkward shortness, then he is wrong, and not by a little, but by a lot. But I think there's a large chunk of truth to be seen here, which is that many women would prefer being in that unhappier and more miserable relationship than with him, as shown by revealed preference (I also think he vastly underestimates both the quantity and quality of women who would prefer the opposite).

Whereas if a stranger online tells me that I'm weak, lazy and pathetic, that's probably because it's true.

I agree with most of what you wrote above this, though I'm ignorant of whether it's actually true that you look small, fat, or weak. But I'm not sure how you land on the conclusion that a stranger online telling you these things is an indication that they're true. Strangers online are not known for their honesty, nor are they known for their great judgment. I think it's quite possible that they're telling you this because it's true, but I'm skeptical that it's probable. If a stranger online told you you were strong, conscientious, and great, would you also presume that it's probably because it's true?

I noticed this too, and I think it's common enough that just adding on an apostrophe, with no "s" after, is now a correct way to turn a singular noun that ends with an "s" into a plural. It's kinda like how "I could care less" is one correct way of conveying "I care so little that it is physically impossible for me to care any less than I already do", or how putting the period or comma after the closing quotation mark, like I just did earlier in this sentence, is just as correct as putting it immediately before the quotation mark, because so many people kept doing the former despite what our English teachers taught us.

Just be careful not to get ahegao and ahoge confused. They'll lead you towards very different things.