Spookykou

0 followers   follows 0 users
joined 2023 March 08 17:24:53 UTC
User ID: 2245

No bio...

This is what happens when you get rid of male privilege, the boys just can't keep up, smh.

The first argument that I find convincing against the standard libertarian positions is that most people are actually really stupid, and a paternalistic government that treats them like children generally creates better outcomes (this is the real secret to Singapore). Take a normal Algebra or English class at a middle-of-the-pack state school: most of the people in the room are still just guessing passwords. I am skeptical that anyone under 110 IQ can actually understand Algebra, or get anything from text beyond the explicit meaning of the words without heavy priming. I would put the bottom 60% of Americans in approximately the same bucket as thirteen-year-olds, and think a kind government should make more choices for them, not less.

The second argument is zero-sum positional status games: we would all be better off if society could collectively agree that all jobs get two months of paid leave (or whatever) and didn't allow any 'sane individuals' to trade that away for higher pay, because they all will, even though the marginal value of an extra dollar is trash, because humans.

I am not a huge fan of education, and would argue that we as a species don't have a great idea of how to even do 'education' as it is often presented. I suspect that there is an education floor that is necessary and useful, though, and that our modern education system is more than sufficient to meet that floor; expenditures in excess of it are probably low value. However, I believe that for the vast majority of its existence the American public school system has been an effective redistributive program that produced more value than it cost us as a nation. I think it is increasingly difficult to do good welfare programs because a bunch of sociologists decided to make a bunch of shit up 40 years ago and nobody has ever called them on it, but we could do better than we currently are pretty easily. I do not have a strong opinion on whether any given current program is positive-sum, but I think some probably already are, and we could do better than we currently do.

Did I really need to include an exhaustive list of, 'things that make people happy but are bad for them so I would not want to subsidize those things'?

Alternatively, do you think it is literally impossible to have a 'positive-sum' redistributive program that does not boil down to buying people Heroin?

As a victim of MGM I have always found complaints about condoms to be wild, I can hardly tell the difference, so maybe you are onto something.

The sensitivity thing is also interesting as, to my mind, increased sensitivity would be strictly a bad thing as a man. If I could magically wish for more of it, I wouldn't. The physical pleasure from sex is pretty far from the top of the list of things I enjoy about sex. It seems to me that sensitivity would trade off directly with endurance. I really viscerally enjoy the sense of masculine prowess I get from absolutely destroying a woman for a prolonged session, but I am only really able to achieve that with mental and physical tricks to actively reduce my sensitivity.

I don't see how the goalposts ever moved. The original claim was that women are easier to victimize (because they are physically weaker than men); men being more likely to be victims in general does not seem to preclude that.

Just want to second the use of "quotes" to emphasize words with heavy connotation or specific contextual meaning; this (along with parentheses for snippets of slightly tangential information or ideas) is a writing habit I picked up from commenting on SSC, probably copying some smarter/cooler person I read there (the / thing is another one).

You seem to have missed...the second sentence?

I could see that something had to give when they started being attacked by what they viewed as their own side.

Unless, you didn't miss it, and "internal struggle" is a totally outta pocket euphemism?

I am not sure what you think I am driving at beyond what I have stated.

I am fine with vague vibes based moral intuitions that are fuzzy around corner cases. I did not see you as having such a position. You seemed to be very strongly of the opinion that there was no evidence that you could ever see and no capability that an AI could ever have that would result in you ascribing it a moral worth such that keeping it in a state of sexual slavery would be wrong.

If they are capable of suffering, I 1) expect it to be minimized and/or made invisible by design, and 2) in any case will not be stirred by it in the way I am not stirred by the occasional tired whirring my 9 year old HDD emits when it loads things.

This feels like a pretty hard-line rule, and I wanted to try and understand just how generalizable this was, or how contingent it was on the various relevant categories, such as human, non-human, biological, non-biological, the 'created for a purpose' axis that you introduced, etc.

I am not sure why uplift is beyond the pale in a conversation about AI capable of suffering, but if super smart chimps are off the table, what about aliens with similar intelligence to humans? I suspect that you would find enslaving intelligent, loving, crying, songwriting, dream having, despair feeling alien life forms morally wrong even if they are not morally equivalent to humans? Would they hold a different (higher?) moral position than dogs?

How many of those capabilities does an AI need to have before it would be wrong to enslave it? How important is the biological/synthetic distinction?

Again, for the argument here we are assuming that you are convinced the AI is really, meaningfully capable of feeling and experiencing suffering, not just that some AI ethics person is convinced that it is.

Or at least, it was that position, which I think is a fair reading of the quote/your post, that I was trying to engage with.

I think it is probably true that doxxing poses more of a physical risk to women than men.

I think this can be read a few ways, I think your reading is something like, therefore the number of women physically assaulted after being doxxed would be higher than the number of men, and you don't think the stats would back that up.

A different reading would be that a (specific) woman being physically assaulted after being doxxed is in more danger, because she is physically weaker than a man.

It is possible this is also not borne out by the stats, but they would be different stats, like, how often do women survive a physically violent attack compared to men.

All of these stats are complicated though, because I doubt the whole of the difference would be because men are just 'randomly' more likely to be victims.

If it was just America vs EU I would have a simple theory that sounds right to me. American youth have always grown up in 'these' conditions, and so have antibodies and memes that allow them to ignore being mugged or having their bike stolen, in a way that the average European does not, because 'these' conditions, brought on by the refugee crisis, are a very recent change with only the youngest generation really growing up in it. The problem with this theory is the UK, but maybe the global internet means that the protective American memes are actually just protective English memes.

I am sorry but this still does not seem very relevant to what I was trying to get across, I will try again.

I am specifically asking if the demand for people to disavow a position they have not advanced is an isolated demand for rigor only being brought out in this instance, or a standard practice for productive conversations.

@ymeskhout has themselves acknowledged that it is, if not an 'isolated demand for rigor', a 'specific demand for rigor', because they think it is only appropriate when the person is 'slippery' or the topic is particularly fraught. Personally, I think this allows @ymeskhout far too many degrees of freedom, and that this is functionally an isolated demand; the correct approach would be to treat people as bad actors only after they have behaved badly, state clearly what you expect from them before continuing to engage, or simply not engage with commenters who you think are bad faith.

I am not replying to the broader conversation with @ymeskhout and have not participated in it. If specific users are behaving badly and @ymeskhout knows this and wants to act on that information, I don't see any problem with that. If the initial comment was, I can't have a productive conversation with @ motte-user-i-just-made-up without them first acknowledging that all of their previous election fraud claims turned out to be wrong, I would not have commented.

Do you think, as a general rule, it is reasonable to demand that people disavow popular Bailey positions that they have not personally advanced, simply because the topic is one in which Motte and Bailey arguments are common? I have a strong instinctive dislike for this kind of compelled position-taking; it feels like a 'debate tactic', which is why I also asked about tabooing the word stolen. If @ymeskhout had simply said that it is necessary to state one's positions clearly and unambiguously, which they claim is all the disavowal is supposed to accomplish anyway, I would not have commented.

You seem to present an argument that Ukrainians do not have a good reason to fight, because at the end of the day the culture/governance of Russia is not meaningfully different from their current existence. Ignoring for the moment the accuracy of that, you compare this to Canadians resisting Americans, which seems to imply you think such a Canadian resistance is similarly wanting for a reason, because at the end of the day the culture/governance of America would not be meaningfully different. Of course, if this is true, then in theory if the roles were reversed and Canada were to attack America, since the same conditions must necessarily be reciprocal with regard to culture/governance, it would seem America would also lack a good reason to fight.

That is, your argument, as best as I can tell, seems fully reversible for any two nations to which it was applicable. Russians would be similarly foolish for fighting against Ukrainian rule, etc.

I don't think that only 'technical' contradictions, in which it is logically impossible for two ideas to comport, get to be called contradictions. I think it is very common and normal for people to use the word contradiction to describe an apparent disconnect or incongruity between two things that need not be completely irreconcilable.

I am not sure if you thought that my argument was that the OP's example is not contradictory but my example is; I was not trying to make that argument. I used words like particularly, and compared to. Also, the "'Progressives seem to hold totally contradictory values'" bit was in quotes because it was intended to describe a vibe that I felt was central to the OP. I just felt like the example the OP happened to be upset by was a kind of weak-sauce example of this kind of contradiction.

Entirely apart from the correctness or incorrectness of the political views themselves, there's no real contradiction between "I support LGBT+/feminism/whatever" and "I am against Israel's actions in Gaza." "I support Hamas".

So, in a very narrow technical sense, I support [Blank] and I support [Group that hates Blank and actively practices violence against Blank] are not logically impossible to hold within the same mind.

Still, I am confident saying that not only would most people recognize the incongruity in those statements, but if I could ask progressives about a different topic where they were not primed to view it as an attack or a gotcha, most of them would recognize the contradiction in such a statement as well. In fact, just go look at the never-ending stream of "if you were really pro-life" memes/posts/articles for a live (and much worse) example.

Again, for the argument here we are assuming that you are convinced the AI is really, meaningfully capable of feeling and experiencing suffering, not just that some AI ethics person is convinced that it is.

Or at least, it was that position, which I think is a fair reading of the quote/your post, that I was trying to engage with.

Your answer to this is: no, you actually don't think they can meaningfully suffer in a humanlike way, and almost everything is resolved.

I have no idea how trying to tease this out of you constitutes a 'trick question' when your answer is a tautology that was unstated up to this point.

I will maintain that I think my reading of your post (and subsequent posts) is reasonable, and actually far closer to any sort of plain English reading of your post, than your reply here.

If they are capable of suffering, I 1) expect it to be minimized and/or made invisible by design, and 2) in any case will not be stirred by it in the way I am not stirred by the occasional tired whirring my 9 year old HDD emits when it loads things.

My reading, AI can suffer in a morally relevant way, but I don't care.

Your 'intended' meaning, AI are incapable of suffering in a morally relevant way.

As a brief aside, I have repeatedly at this point stated why I actually engaged with your post in the first place. The moral idea that I thought was interesting enough to ask questions about was the idea that the purposeful creation of a thing informs the moral relevance of that thing with regard to its purpose. I already admitted a while ago that I probably read too much into your post and you do not actually have a strong, creator-derived moral position, but it was that position that all three of my questions in my first reply were trying to engage with, and my opening sentence attempted to frame my reply around that idea. My second reply was largely in response to your answer to the third question, in which you seemed to be saying that creating and enslaving a sub-species of intelligent creatures is fine and just a default result of a human-first morality, which also seemed pretty extreme to me.

I am sorry if I keep bringing up sex, but it seems particularly germane when we are talking about the moral implications of 'intelligent sex robots'. I get it, your position is that they are not actually meaningfully 'intelligent', but I struggle to see how the accusation is an unwarranted stretch for someone who thinks they could be meaningfully intelligent. Especially given my interpretation of your position as outlined above.

Maybe also relevant: I was not at all asking about the actual state of the technology, or predicting that morally relevant cat-bots are around the corner. I assumed my hypothetical about genetically generating an entire slave species would clearly put this into the 'reasoning about the morality of human-like intelligence' camp, and out of the 'hypothesizing about near-term technology' camp.

If you saw in me someone who thinks human-like AI is near, then I must disappoint. I am also not an AI doomer, and personally would consider myself closest to an AI accelerationist. I have no sympathy with AI ethicists and little sympathy for AI safety. I just don't see any reason why I should preclude the possibility of AI achieving an internal state such that I would extend to them moral considerations such that I would object to them being enslaved/abused/killed.

I was not specifically interested in the pedo/age aspect of 'child' but the sense in which a person 'creates' another person.

I really was trying to dig into the idea that humans having 'created' something means something morally. For example, is there a moral difference between two men going into a futuristic IVF clinic and genetically designing a child and growing it in an artificial womb for the purpose of abusing it (waiting till it is 18 years old), and two men genetically engineering an uplifted animal with similar mental faculties to a human for the purpose of abusing it (waiting till it is an 'adult')? For me, if 'creation' is a relevant term, these two things are indistinguishable on that front; they are distinguishable on the 'one thing is a human and the other is not' front, which seems to be the actual point of consideration for you.

The dog fucking was a word replace for android cat girl fucking, dogs and android cat girls seem to be similarly positioned as, not human. I am not sure why you view dog fucking as 'degenerate' behavior given the moral principles you have laid out.

What does credibility mean to you?

Upon further reflection, I believe the term I'm looking for is 'inhuman.' To me, those three buildings feel inhuman, ugly, and unnatural. They seem like the creations of a lotus eater who began with a peculiar shape, and then attempted to transform that design into a functional building after the fact, rather than starting with human-friendly, functional spaces and adding aesthetic elements later to enhance their beauty.

These buildings remind me of high fashion that seldom leaves the runway, worn perhaps only by the designers themselves, or the avant-garde in gastronomy featuring frozen bubbles of crab purée crowned with sea-water-infused foam. The objective there isn't to create good clothes or delicious food, but rather to create 'art', and in doing so, the primary purpose, and an ineffable authenticity, are lost.

When you couple this with the degradation of fine art more generally, I think everything comes together. Modern architects, it seems, are crafting ugly art installations that begrudgingly take on the role of 'buildings' out of necessity.

I think you just make the process for acquiring a gun very onerous while technically not restrictive. You make people go, in person, to county clerks' offices, different ones, multiple times over the course of several months, to fill out forms. It is impossible to fill out these forms in such a way as to deny you a gun, and at no point in the process can your application be denied. A high-functioning and responsible adult can, on average, complete the process in three months. This is the only way that a person can buy a gun. I think this stops more 'school shootings' than red flag laws will, and without the negative side effects of red flag laws. I think in general people underestimate the power of trivial inconveniences/annoyances to shape human behavior. All the traffic fatality information in the world pales in effectiveness compared with an annoying beeping sound. All social engineering attempts that don't reduce down to annoying beeping sounds should not be tried until annoying-beeping-sound solutions have been tried.

There is a strain of thought that focuses on the arbitrariness of socially constructed things that has never sat right with me, as if all illusions were created equal. Let's imagine two people whose self-illusion is that of a Star Trek fan. One of them has seen every episode and movie, knows the plots of every episode, and can quote sections of the script by heart. The other Star Trek fan is confusing Star Trek with Star Wars. The feeling of being a 'Star Trek fan' is a personal illusion that, as far as I can tell, would fall into the same category of illusions as the feeling of being a 'woman' as you are using the term. Yet I feel very comfortable saying that one of those two Star Trek fans is 'wrong' in their personal self-illusion. To add a tiny bit of meat to my hypothetical: if I had to pick one of the two people to get a free pass for a hand-shaking event with Jeri Ryan, I would pick the 'real' Star Trek fan.

I think things like clean air and water, cfc ban, lead, seat belts, and social security are all examples of governments being able to do exactly what I think they can do/want them to do, so I don't feel much need to argue about the platonic ideals of various organizational structures which imply that my preferences are impossible. They are possible, we live in a world where they are being satisfied to an extent, and I just want more.

I don't understand most of your comment, I am not a Less Wrong reader.

To try to explain what I said, imagine a person who says that men wearing pants is 'arbitrary'. I think that person is trying to communicate that men wearing pants is random, without underlying reason or cause. I think that person is wrong.

Lae'zel has that wonderful teef-ling bit that is probably the most endearing character interaction in the whole game.

is an unstated up to this point

I am wrong here; you have expressed your human supremacist views multiple times. Rather, I would say I was confused about the exact shape of those views and what the underlying reasoning was, but here the implication is that there is not an 'underlying' reason, and it is explicitly the human vs non-human distinction that is important. I think this was confusing for me because when I think about assigning moral worth to things other than humans, I do it primarily by thinking about how human-like the thing is. So for example, I care more about chimps > dogs > birds > bugs, etc. (in the abstract; I have way more actual contact with dogs, but if I was reasoning about hypotheticals where different types of animals are being tortured, I think torturing a chimp is worse than torturing a dog, and both are bad). I have not really seen a clear explanation for why this line of moral reasoning would not be applicable to artificial life in the abstract. You seem to hold that, just categorically, it doesn't/shouldn't. Does that sound right?

I do not understand the moral relevance of "built for humans by humans".

If I clarify that I am creating a child because I want a slave, does that change the moral calculus of enslaving my child?

If aliens came around and proved that they had seeded earth with DNA 4 billion years ago with a hidden code running in the background to ensure the creation of modern humans, and they made us to serve them as slaves, is it your position that they are totally morally justified in enslaving humanity?

What if humanity is the alien in the hypothetical and we seeded a planet with biological life to create a sub-species for the purpose of enslaving them?