Spookykou

0 followers   follows 0 users   joined 2023 March 08 17:24:53 UTC

User ID: 2245


This is what happens when you get rid of male privilege, the boys just can't keep up, smh.

Just want to second the use of "quotes" to emphasize words with heavy connotation or specific contextual meaning; this (along with parentheses for snippets of slightly tangential information or ideas) is a writing habit I picked up from commenting on SSC, probably copying some smarter/cooler person I read there (the / thing is another one).

You seem to present an argument that Ukrainians do not have a good reason to fight, because at the end of the day the culture/governance of Russia is not meaningfully different from their current existence. Setting aside the accuracy of that for the moment: you compare this to Canadians resisting Americans, which seems to imply you think such a Canadian resistance would be similarly wanting for a reason, because at the end of the day the culture/governance of America would not be meaningfully different. Of course, if this is true, then in theory if the roles were reversed and Canada were to attack America, since the same conditions must necessarily be reciprocal with regard to culture/governance, America would also seem to lack a good reason to fight.

That is, your argument, as best as I can tell, seems fully reversible for any two nations to which it is applicable. Russians would be similarly foolish for fighting against Ukrainian rule, etc.

Personally, I do not really model governments as 'entities' that take action based on some sort of game-theoretic rational self-interest. Governments seem to be collections of people who are generally following their own individual incentives, which can very easily lead to governments doing things that are not really in the interest of the government as a whole, if one were to think of it as an entity.

To the specific question, I think there was a very effective march through the institutions which caused woke/progressive ideas to reach fixation in the university system, to such an extent that 90%+ of college graduates from the last 15 years (give or take) are 'true believers,' in as much as midwits can truly believe anything. One of those true beliefs is that crime is (almost?) totally downstream of societal oppression, and specifically that the criminal justice system is a sort of negative feedback loop that creates and then punishes criminals, and that the cruel impositions of the criminal justice system upon the 'criminal class' are an untenable injustice. I think once enough young professionals who hold these and similar beliefs filter into the various DA offices of the world, you eventually get to a point where they are able to coordinate action and push through soft-on-crime practices based on the idea that contact with the criminal justice system is toxic.

You seem to have missed...the second sentence?

I could see that something had to give when they started being attacked by what they viewed as their own side.

Unless, you didn't miss it, and "internal struggle" is a totally outta pocket euphemism?

I am not sure what you think I am driving at beyond what I have stated.

I am fine with vague vibes based moral intuitions that are fuzzy around corner cases. I did not see you as having such a position. You seemed to be very strongly of the opinion that there was no evidence that you could ever see and no capability that an AI could ever have that would result in you ascribing it a moral worth such that keeping it in a state of sexual slavery would be wrong.

If they are capable of suffering, I 1) expect it to be minimized and/or made invisible by design, and 2) in any case will not be stirred by it in the way I am not stirred by the occasional tired whirring my 9 year old HDD emits when it loads things.

This feels like a pretty hard-line rule, and I wanted to try to understand just how generalizable it was, or how contingent it was on the various relevant categories, such as human, non-human, biological, non-biological, the 'created for a purpose' axis that you introduced, etc.

I am not sure why uplift is beyond the pale in a conversation about AI capable of suffering, but if super-smart chimps are off the table, what about aliens with intelligence similar to humans? I suspect that you would find enslaving intelligent, loving, crying, songwriting, dream-having, despair-feeling alien life forms morally wrong even if they are not morally equivalent to humans? Would they hold a different (higher?) moral position than dogs?

How many of those capabilities does an AI need to have before it would be wrong to enslave it? How important is the biological/synthetic distinction?

Again, for the argument here we are assuming that you are convinced the AI is really, meaningfully capable of feeling and experiencing suffering, not just that some AI ethics person is convinced that it is.

Or at least, it was that position, which I think is a fair reading of the quote/your post, that I was trying to engage with.

I was not specifically interested in the pedo/age aspect of 'child' but the sense in which a person 'creates' another person.

I really was trying to dig into the idea that humans having 'created' something means something morally. For example, is there a moral difference between two men going into a futuristic IVF clinic, genetically designing a child, and growing it in an artificial womb for the purpose of abusing it (waiting till it is 18 years old), compared with two men genetically engineering an uplifted animal with mental faculties similar to a human's for the purpose of abusing it (waiting till it is an 'adult')? For me, if 'creation' is a relevant term, these two things are indistinguishable on that front; they are distinguishable on the 'one thing is a human and the other is not' front, which seems to be the actual point of consideration for you.

The dog fucking was a word-replace for android cat girl fucking; dogs and android cat girls seem to be similarly positioned as not human. I am not sure why you view dog fucking as 'degenerate' behavior given the moral principles you have laid out.

I am not a huge fan of education, and would argue that we as a species don't have a great idea of how to even do 'education' as it is often presented. I suspect that there is an education floor that is necessary and useful, though, and that our modern education system is more than sufficient to meet that floor; expenditures in excess of it are probably low value. However, I believe that for the vast majority of its existence the American public school system has been an effective redistributive program that produced more value than it cost us as a nation. I think it is increasingly difficult to do good welfare programs because a bunch of sociologists decided to make a bunch of shit up 40 years ago and nobody has ever called them on it, but we could do better than we currently are pretty easily. I do not have a strong opinion on whether any given current program is positive-sum, but I think some probably already are, and we could do better than we currently do.

What does credibility mean to you?

I think it is probably true that doxxing poses more of a physical risk to women than men.

I think this can be read a few ways. I think your reading is something like: therefore the number of women physically assaulted after being doxxed would be higher than the number of men, and you don't think the stats would back that up.

A different reading would be that a (specific) woman being physically assaulted after being doxxed is in more danger, because she is physically weaker than a man.

It is possible this is also not borne out by the stats, but they would be different stats: something like how often women survive a physically violent attack compared to men.

All of these stats are complicated though, because I doubt the whole of the difference would be because men are just 'randomly' more likely to be victims.

I don't see how the goalposts ever moved. The original claim was that women are easier to victimize (because they are physically weaker than men); men being more likely to be victims in general does not seem to preclude that.

As a victim of MGM I have always found complaints about condoms to be wild, I can hardly tell the difference, so maybe you are onto something.

The sensitivity thing is also interesting as, to my mind, increased sensitivity would be strictly a bad thing as a man. If I could magically wish for more of it, I wouldn't. The physical pleasure from sex is pretty far from the top of the list of things I enjoy about sex. It seems to me that sensitivity would trade off directly with endurance. I really viscerally enjoy the sense of masculine prowess I get from absolutely destroying a woman for a prolonged session, but I am only really able to achieve that with mental and physical tricks to actively reduce my sensitivity.

Upon further reflection, I believe the term I'm looking for is 'inhuman.' To me, those three buildings feel inhuman, ugly, and unnatural. They seem like the creations of a lotus eater who began with a peculiar shape, and then attempted to transform that design into a functional building after the fact, rather than starting with human-friendly, functional spaces and adding aesthetic elements later to enhance their beauty.

These buildings remind me of high fashion that seldom leaves the runway, worn perhaps only by the designers themselves, or the avant-garde in gastronomy featuring frozen bubbles of crab purée crowned with seawater-infused foam. The objective there isn't to create good clothes or delicious food, but rather to create 'art', and in doing so, the primary purpose, and an ineffable authenticity, are lost.

When you couple this with the degradation of fine art more generally, I think everything comes together. Modern architects, it seems, are crafting ugly art installations that begrudgingly take on the role of 'buildings' out of necessity.

I think you just make the process for acquiring a gun very onerous while technically not restrictive. You make people go, in person, to county clerks' offices, different ones, multiple times over the course of several months, to fill out forms. It is impossible to fill out these forms in such a way as to deny you a gun, and at no point in the process can your application be denied. A high-functioning and responsible adult can, on average, complete the process in three months. This is the only way that a person can buy a gun. I think this stops more 'school shootings' than red flag laws will, and without the negative side effects of red flag laws. I think in general people underestimate the power of trivial inconveniences/annoyances to shape human behavior. All the traffic fatality information in the world pales in effectiveness compared with an annoying beeping sound. All social engineering attempts that don't reduce down to annoying beeping sounds should not be tried until annoying-beeping-sound solutions have been tried.

There is a strain of thought that focuses on the arbitrariness of socially constructed things that has never sat right with me, as if all illusions are created equal. Let's imagine two people whose self-illusion is that of a Star Trek fan: one of them has seen every episode and movie, knows the plots of every episode, and can quote sections of the script by heart. The other Star Trek fan is confusing Star Trek with Star Wars. The feeling of being a 'Star Trek fan' is a personal illusion that, as far as I can tell, would fall into the same category of illusions as the feeling of being a 'woman' as you are using the term. Yet I feel very comfortable saying that one of those two Star Trek fans is 'wrong' in their personal self-illusion. To add a tiny bit of meat to my hypothetical: if I had to pick one of the two people to get a free pass for a hand-shaking event with Jeri Ryan, I would pick the 'real' Star Trek fan.

I don't understand most of your comment, I am not a Less Wrong reader.

To try to explain what I said, imagine a person who says that men wearing pants is 'arbitrary'. I think that person is trying to communicate that men wearing pants is random, without underlying reason or cause. I think that person is wrong.

Lae'zel has that wonderful teef-ling bit that is probably the most endearing character interaction in the whole game.

"is an unstated up to this point"

I am wrong here; you have expressed your human-supremacist views multiple times. Rather, I would say I was confused about the exact shape of those views and what the underlying reasoning was, but here the implication is that there is not an 'underlying' reason, and it is explicitly the human vs non-human distinction that is important. I think this was confusing for me because when I think about assigning moral worth to things other than humans, I do it primarily by thinking about how human-like the thing is. So for example, I care more about chimps>dogs>birds>bugs, etc (in the abstract; I have way more actual contact with dogs, but if I were reasoning about hypotheticals where different types of animals are being tortured, I think torturing a chimp is worse than torturing a dog, and both are bad). I have not really seen a clear explanation for why this line of moral reasoning would not be applicable to artificial life in the abstract. You seem to hold that, just categorically, it doesn't/shouldn't. Does that sound right?

Again, for the argument here we are assuming that you are convinced the AI is really, meaningfully capable of feeling and experiencing suffering, not just that some AI ethics person is convinced that it is.

Or at least, it was that position, which I think is a fair reading of the quote/your post, that I was trying to engage with.

Your answer to this is: no, you actually don't think they can meaningfully suffer in a humanlike way, and almost everything is resolved.

I have no idea how trying to tease this out of you constitutes a 'trick question' when your answer is a tautology that was unstated up to this point.

I will maintain that I think my reading of your post (and subsequent posts) is reasonable, and actually far closer to any sort of plain English reading of your post, than your reply here.

If they are capable of suffering, I 1) expect it to be minimized and/or made invisible by design, and 2) in any case will not be stirred by it in the way I am not stirred by the occasional tired whirring my 9 year old HDD emits when it loads things.

My reading, AI can suffer in a morally relevant way, but I don't care.

Your 'intended' meaning, AI are incapable of suffering in a morally relevant way.

As a brief aside, I have repeatedly at this point stated why I actually engaged with your post in the first place. The moral idea that I thought was interesting enough to ask questions about was the idea that the purposeful creation of a thing informs the moral relevance of that thing with regard to its purpose. I already admitted a while ago that I probably read too much into your post and you do not actually have a strong, creator-derived moral position, but it was that position that all three of the questions in my first reply were trying to engage with, while my opening sentence attempted to frame my reply around that idea. My second reply was largely in response to your answer to the third question, in which you seemed to be saying that creating and enslaving a sub-species of intelligent creatures is fine and just a default result of a human-first morality, which also seemed pretty extreme to me.

I am sorry if I keep bringing up sex, but it seems particularly germane when we are talking about the moral implications of 'intelligent sex robots'. I get it, your position is that they are not actually meaningfully 'intelligent', but I struggle to see how the accusation is an unwarranted stretch for someone who thinks they could be meaningfully intelligent. Especially given my interpretation of your position as outlined above.

Maybe also relevant: I was not at all asking about the actual state of the technology or predicting that morally relevant cat-bots are around the corner. I assumed my 'genetically generating an entire slave species' hypothetical would clearly put this into the 'reasoning about the morality of human-like intelligence' camp, and out of the 'hypothesizing about near-term technology' camp.

If you saw in me someone who thinks human-like AI is near, then I must disappoint. I am also not an AI doomer, and personally would consider myself closest to an AI accelerationist. I have no sympathy for AI ethicists and little sympathy for AI safety. I just don't see any reason why I should preclude the possibility of AI achieving an internal state such that I would extend to them moral considerations such that I would object to their being enslaved/abused/killed.

I do not understand the moral relevance of "built for humans by humans".

If I clarify that I am creating a child because I want a slave, does that change the moral calculus of enslaving my child?

If aliens came around and proved that they had seeded earth with DNA 4 billion years ago with a hidden code running in the background to ensure the creation of modern humans, and they made us to serve them as slaves, is it your position that they are totally morally justified in enslaving humanity?

What if humanity is the alien in the hypothetical and we seeded a planet with biological life to create a sub-species for the purpose of enslaving them?

I was speaking specifically to this comment thread/similar comment threads here on the Motte and am not sure how people more generally use 'HBD awareness' in conversation.

From this thread, you said, paraphrasing, 'Assuming for the sake of the argument that HBD is correct, what does being "HBD aware" add,' and 4bpp, again paraphrasing, explained that HBD is an 'alternative to the normal structural racism argument used to explain disparate outcomes; with HBD we could stop enforcing disparate impact laws, because disparate impact would no longer be considered iron-clad proof of racial discrimination'. Finally, Doubletree2 chimed in (yes, I am still paraphrasing), saying that 'explaining HBD to the structural racism people would just convince them that structural racism is correct, cause you sound like a racist'. I was responding to what I felt was Doubletree2's confusion as to what was being discussed: nobody was using 'HBD awareness' to mean telling progressives HBD things. In both your prompt and 4bpp's response it is a basic assumption of the thought experiment that HBD is accepted enough to inform policy.

I think the phrase 'HBD awareness' is being used specifically to sidestep the practical political realities of how unpopular the concept is. That is, I do not think most people mean a literal awareness campaign where they want to just go around and tell progressives that race realism is correct, or some such, and think that would work. I assume when 'HBD awareness' is being brought up it is normally presupposing a world where people are at least open to being convinced that HBD is correct, or already think that it is correct, and then reasoning about the possible policy realities from that point.

This is such a commonly expressed idea that is so alien to me I would love to get more details on it.

I don't even particularly love Christmas, I have spent more than a couple of them alone, but the only part of Christmas Creep that I find even slightly objectionable is the music. I enjoy Christmas decorations; I think they are cozy and festive, and often well done. It takes a lot of time and effort to put up good Christmas decorations, and it seems crazy to me to go through all that trouble only to put them up a handful of days in advance and take them down immediately. Enjoy the atmosphere. Light a fire in the fireplace. Drink more hot cocoa and mulled wine.

Thinking about my own preference here, it reminds me of Diamond Age, and the phyles/claves which I absolutely loved. I guess to try and name this nebulous concept I am feeling, it might be something like Aesthetic Intentionality. Anything that pushes against the dead aesthetics of 'universal culture' is at least interesting if not strictly preferable to me.

I don't like most of the Pritzker winners. I went and looked at each of your links, and everyone except Santiago (whose buildings all look like different shots of the same building, lots of curves; I hate them all as well) has at least one 'concrete box' building. Sure, maybe it is actually a glass and steel box, and it is on its side, or a glass and steel trapezoid, but personally, 'concrete box' is not a literal description. I would bet that the average person who complains about 'modern architecture', 'brutalism', and 'concrete boxes' would also hate everything in Rem Koolhaas's portfolio, even if none of it is technically any of those things. Could you please tell me an acceptable shorthand so that I can complain about these things without someone complaining that I am using the wrong terms of art? It is not as simple as all new buildings; the campus in the AIA link is mostly fine, although there are modern (though probably not technically) elements that I think strictly detract from the design. Is there a word or phrase that I can use to properly express my distaste for most (maybe all) architectural trends that have emerged over the last 50-100 years?

You would expect to see little to no reduction in fatalities from this kind of proposal; the point is to stop the Uvalde-style "kid goes to a store, legally buys a gun, kills a bunch of children" story, which is the most inflammatory possible news story and provides the most ammunition for gun-control advocates, even if it is a rounding error in terms of total gun deaths.