@CeePlusPlusCanFightMe's banner p

CeePlusPlusCanFightMe

Self-acceptance is bunk. Engineer that shit away.

0 followers   follows 5 users  
joined 2022 September 05 17:01:33 UTC
Verified Email

				

User ID: 641

CeePlusPlusCanFightMe

Self-acceptance is bunk. Engineer that shit away.

0 followers   follows 5 users   joined 2022 September 05 17:01:33 UTC

					

No bio...


					

User ID: 641

Verified Email

Gender dysphoria and its similarities to more general body dysphoria

So consider the /r/loseit subreddit. There are a ton of people on there who hate their appearance and would like it to be different. Consider also the community of people who get plastic surgery.

Hating your body is a very universal human experience! An experience that sucks! The interesting thing here is how the different types of "hating your body" are perceived radically differently by wider society. As in:

(1) Consensus is that weight-based body dysphoria is reasonable and you should fix it by dieting. (It can also be fixed by medication-- semaglutide/tirzepatide, in particular-- but this has not achieved widespread social acceptance.) There is also a fat-acceptance movement, but this is niche and is discouraged by obesity being comorbid with a ton of medical issues.

(2) Consensus is that age-based and (more broadly) ugliness-based body dysphoria is something you should just get over instead of addressing directly. Plastic surgery exists, but it does not have widespread social acceptance, and it is socially acceptable to make fun of women whose plastic surgeries are bad enough to be noticeable.

The common line of "cosmetic surgery won't make you feel better about yourself" is contradicted by pretty clear evidence on average; a cursory google scholar search gets us https://academic.oup.com/asj/article/25/3/263/227685 , which claims the following:

Eighty-seven percent of patients reported satisfaction with their postoperative outcomes. Patients also reported significant improvements in their overall appearance, as well as the appearance of the feature altered by surgery, at each of the postoperative assessment points. Patients experienced significant improvements in their overall body image, their degree of dissatisfaction with the feature altered by surgery, and the frequency of negative body image emotions in specific social situations. All of these improvements were maintained 12 months after surgery.

(3) Gender dysphoria has, of course, gotten a huge amount of play in the media since addressing it optimally requires surgery and hormones in adolescence, when we mostly accept that people have not yet reached their full capacity for judgement. Plus, even in rich countries bio-engineering has not reached nearly the place it would need to in order to make neogenitalia function properly, or for "passing" to be easy for transitioners.

Is the current push for social acceptance of gender-based body modification something that will spread into other kinds of artificial body modification, such as plastic surgery for appearance or medications for weight loss?

I certainly hope so!

Most of my thoughts on this are driven by the practicalities of things we can do right now; I see no reason, assuming all technological restraints were lifted, that anyone shouldn't be able to do anything they want with their bodies.

Similarly, I feel like the only strong arguments against transitioning genders stem from the fact that our bio-engineering isn't up to snuff.

To be fair, I think the only real hate for transracial people comes from the social-justice left; as far as I've heard nobody moderate to conservative has shown the slightest bit of interest. Admittedly this is also because the social-justice left is by far the segment of society most interested in any given person's race.

There's a reason the Rachel Dolezal transracial flareup happened to be around a college instructor! (Because if, say, the head of the National Rifle Association was transracial nobody would care even a little bit. Why would they?)

The process of losing weight is mostly eating a more normal amount of calories and engaging in physical activity

Worth pointing out that diet and exercise alone have extremely poor intent-to-treat efficacy, generally between 2% and 4% of body weight as measured by most studies. For instance, see https://www.nature.com/articles/0803015 . Medication dramatically improves weight successfully lost (see also: https://www.nejm.org/doi/full/10.1056/NEJMoa2032183)

You're definitely right that diet-and-exercise studies include a huge range of effect sizes. I'm not 100% certain how to interpret this; my suspicion is that there's a hidden intervention sliding scale between "doctor says to you, with gravitas, 'eat healthier'" and "nutritionist locks you in a box and hand-feeds you kale that they calorie-counted themselves." And meta-analyses do a poor job differentiating between these, including the one I linked.

I would expect that more dramatic effects combined with heavier fadeout of results is a natural indicator that a particular study is doing an unsustainably aggressive intervention; in the meta-analysis, it indicated that in both diet-only and diet-and-exercise groups everyone regained about half the weight after a year. Which still does leave 14 pounds, and that isn't anything to sneeze at.

You are also right that there are two ways of doing these studies-- "as prescribed" and "intent-to-treat", and as-prescribed results will always show much better effect sizes than intent-to-treat results. In a sense, intent-to-treat isn't measuring the results of the treatment as much as it is measuring the results of prescribing the treatment. And as-prescribed, diet and exercise will always be 100% effective at inducing any amount of weight loss almost by definition. Hard to beat that, really.

But on the other hand... I kinda figure that intent-to-treat is a fairer representation of real life? In the sense that in real life people don't have the option of getting locked in the nutritionist-box indefinitely. And if two treatments are both effective as-prescribed, but the first one has much worse intent-to-treat efficacy, I want the second treatment.

Hey, there's a reason I'm drawing this comparison here rather than, say, /r/politics.

Though I dislike the characterization that "merely feeling better about yourself" is something frivolous and unimportant. I do agree with you that trans advocates would absolutely object to my characterization above, but I think this is basically just respectability politics; people can and should reshape their body as much as technology allows to suit their desired aesthetics.

The fact that trans advocates would be likely to find the parallel unflattering I think more speaks to societal puritanism around self-modifying your appearance than it does the parallel being inappropriate.

Yeah, sorry, went on a bit of a tangent there. Anyway.

I feel a lot of skepticism about bad diet and exercise habits being the primary causal drivers of obesity, since on a personal level I know some people who struggle to lose weight in spite of vigorous and frequent exercise and a diet heavy in foods traditionally considered healthy.

I expect that genetics has a hell of a lot to do with whether somebody becomes fat or not, and that "well you probably have bad diet and exercise habits" is a close-to-hand explanation that is both extremely difficult to falsify and which satisfies our instincts toward the Just World Hypothesis. There might also be chemical contaminants involved.

I've gotten the impression from trans people I see on social media-- trans women, mostly-- that aesthetics are very important to them, and surgery and hormones help a lot with this. They want not only to be a woman, but also an attractive woman. And why wouldn't they? Attractiveness is an important quality-of-life determinant and I disapprove of pretending it's not.

I think it's a lot like the question of "Why don't you donate 10% of your income to charity?"

"Oh, because I want to spend the money on other things."

"I don't feel that's a reasonable answer that discharges you of your moral obligations."

"Okay, but I'm still not doing it."

What would you accept as non-rhetorical research proving the arrival of AGI that isn't just the arrival of AGI?

I dunno, the parent comment by sulla strikes me as basically calling out a similar (though more inflammatory) situation. We have two possible meanings of the term "bias" in common use, and these two meanings are:

  1. Not faithfully representing statistical realities present in the data.

  2. Not faithfully representing the statistical outcomes that we would like to see in the data-- most commonly, which reflect reality except for not showing differences based on race or gender.

These are, of course, mutually exclusive definitions; e.g. as pointed out in your article the president of the United States should always be drawn as male using definition (1) and should half the time be female using definition (2) . Likewise, classifiers determining how likely someone is to commit a crime ALSO have to make a decision between definitions (1) and (2) while facing the complicated issue of how to avoid public controversy over admitting that these are different things.

You suggest a third, equally plausible definition (3): "Not faithfully representing the statistical outcomes present IN REAL LIFE (as opposed to just the data being trained on)."

That actually strikes me as brutally difficult, running into the same basic issues as fact checkers do now-- evaluating what is true or false in real life is really hard and intersects with political agendas in such a way as to make it even harder. And how do you even evaluate if you succeeded? Don't get me wrong, i think it's reasonably likely that some generative models will get fine-tuned on specific datasets curators will have labeled as similar to "real life" along various dimensions. But I would not anticipate that this will end up becoming the norm.

As an aside, I think it makes a lot of sense that fundamentally the problem being solved by companies is "how do we stop journalists from agitating about our platform", not anything more interesting or important, and the "debiasing" solutions put in place reflect this reality.

I'll point out that the problem might not be so unsolvable as you describe; prompt engineering being what it is, a very thinkable (but dystopian) way some more-capable future version of DALL-E might resolve this is by adding to the prompt "and also, make sure to never portray X ethnicity negatively."

Leaving aside whether "passing" as a concept is intrinsically problematic (probably? man, hell if I know) I definitely think there's pretty strong (for me) delineations between degrees-of-passing.

  1. I cannot distinguish this person from being a cisgendered man without them telling me verbally.

  2. I can tell this person is attempting to pass as a woman, but my hindbrain is continuing to helpfully inform me that this person is a man.

  3. I can tell this person is a trans woman based off of specific conscious cues, but she passes successfully enough to where my hindbrain perceives her as a cis woman.

  4. As far as I can tell this person's a cis woman.

"Passing", depending on context, either means (3) or (4). I definitely will accidentally misgender people who fall into categories (1) and (2), since talking with or about them involves constantly overriding my typical social scripts for dealing with people I've internally categorized as one gender or the other. I don't think that my unconscious sense of other peoples' gender actually distinguishes between "cis" and "trans".

As an aside, trans people are definitely susceptible to the Gaudy Graveyard Effect where the trans movement tends to be identified by people in categories (1) and (2) because that's where all the controversy is centered. Culture war stuff aside I don't think most people have any visceral problem with trans people in categories (3) or (4).

The details of what counts as "negative" would be determined based on the language model's own ideas of what constitutes "negative" based on its time spent with the training data. This is likely, for the most part, to align with conventional understandings of what is "negative".

Oh, shit, I didn't know about the Black Donald Trump thing. That's hilarious.

Yeah, okay, it's a fair cop; even such a policy as I describe would result in amazing PR debacles.

I'm kind of underwhelmed by the Huel Black Chocolate flavor. It's... very okay. Any suggestions on things I should add to it for additional flavor?

At the moment I'm just mixing it with whole milk.

The social-rules-about-reracialization thing is definitely a reasonable one; that's a significant issue that would result in many funny PR disasters.

On reflection vulnerability to adversarial prompt injection seems almost innate to the technology, considering both the above "person holding a sign that says " attack and also the more recent one with remote.ly.

There is no problem humans face that cannot be reframed as a programming or automation problem. Need food? Build a robot to grow it for you, and another to deliver it to your house. Need to build a robot? Make a factory that automates robot fabrication. Need to solve X medical issue? Write a program that figures out using simulations or whatever how to synthesize a chemical or machine that fixes it. Given this, the question of "what happens to programmers when computers can write code for arbitrary domains just as well as programmers can" answers itself.

I expect that fully automating coding will be the last job anybody ever does, either because we're all dead or we have realized Fully Automated Luxury Space Communism.

There is a sense in which the job of coding has already been automated away several times. For instance, high-level languages enable a single programmer to accomplish work that would be out of the grasp of even a dozen assembly-language programmers. (This did, in fact, trash the job market for assembly-language programmers.)

The reason this hasn't resulted in an actual decline in programmer jobs over time is because each time a major tool is invented that makes programming easier (or eliminates the necessity for it in particular domains), people immediately set their sights on more-difficult tasks that were considered impractical or impossible in the previous paradigm.

I don't really see the mechanism by which AI-assisted programming is different in this way. Sure, it means a subset of programming problems will no longer be done by humans. That just means humans will be freed to work on programming and engineering problems that AI can't do, or at least can't do yet; and they'll have the assistance of the AI programmers that automated away their previous jobs.

And if there are no more engineering or programming problems like that, then you now have Automated Luxury Space Communism.

For (1), what you're saying is certainly true; the better abstractions and better tooling has been accompanied by growth in hardware fundamentals that cannot be reasonably expected to continue.

(2) is where I'm a lot more skeptical. A sufficient-- though certainly not necessary-- condition for a valuable software project is identifying a thing that requires human labor that a computer could, potentially, be doing instead.

The reason I called out robotics specifically is because, yeah, if you think about "software" as just meaning "stuff that runs on a desktop computer", well, there's lots of spheres of human activity that occur away from a computer. But the field of robotics represents the set of things that computers can be made to do in the real world.

That being so, if non-robotics software becomes trivial to write I expect we are in one of four possible worlds:

World one: General-purpose robotics-- for example, building robots that plant and harvest crops-- is possible for (AI-assisted) human programmers to do, but it's intrinsically really hard even with AI support, so human programmers/engineers still have to be employed to do it. This seems like a plausible world that we could exist in, and seems basically similar to our current world except that the programmer-gold-rush is in robotics instead of web apps.

World two: General-purpose robotics is really easy for non-programmers if you just make an AI do the robot programming. That means "programming" stops being especially lucrative as a profession, since programming has been automated away. It also means that every other job has been (or will very soon be) automated away. This is Fully-Automated Luxury Space Communism world, and also seems broadly plausible.

World three: General-purpose robotics is impossible at human or AI levels of cognition, but non-robotics AI-assisted programming is otherwise trivial. I acknowledge this is a world where mass layoffs of programmers would occur and that this would be a problem for us. I also do not think this is a very likely scenario; general-purpose robotics is very hard but I have no specific reason to believe it's impossible, especially if AI software development has advanced to the point where almost all other programming is trivial.

World four: World two, except somebody screwed up the programming on one of their robot-programming AIs such that it murders everyone instead of performing useful labor. This strikes me as another plausible outcome.

Are there possibilities I'm missing that seem to you reasonably likely?

For your point (3), I have no particular expectations or insight one way or another.

I see everyone arguing over "well if you make trans-women go to men's prison they'll get raped" vs "well if you make trans-women go to women's prison men will claim to be trans-women and then they'll do the raping", and these seem both pretty obviously true.

The core issue is clearly that-- in spite of the fact that prison inmates were only ever sentenced to prison, not to repeated rape and beatings-- we nevertheless tacitly allow (what you might think of as) these extrajudicial punishments to occur, and have never bothered to build any effective safeguards against that happening.

I joke with my wife about how if we really thought that prison rape should be part of the punishment for crimes that send you to prison, we should (1) make the judge explicitly add that to the convict's sentence, in those specific words, and (2) said judicially-mandated prison rape should be performed by a generously-pensioned and fundamentally disinterested civil servant on an explicit schedule.

It is, after all, hardly less barbaric to have that same punishment levied completely at random based on how physically strong or weak the prisoner is relative to their would-be rapists.

The value of HBD being true is basically nothing, as far as I'm concerned.

I-- and, I think, a lot of other people here-- just have an intense, instinctive flinch response to people saying things that aren't correct and when people say obvious nonsense, even if it's the most well-intentioned nonsense in the world, it triggers that flinch response. Obviously I don't say anything about it; I'm not stupid, and I value my social life.

Constrained reproduction is the stupid and unethical way to go about solving dysgenics, though-- it's never gonna happen, and if it did it would get weaponized by the people in power almost immediately against the people out of power. That's aside from any ethical considerations about involuntarily blocking people off from having kids, which are real and important.

My suggestion? Government-subsidized polygenic screening for everyone, optimizing for health and IQ, let's gooooooo

(Never solve with social technology that which you can instead solve with actual technology)

PGS technology exists today.

The latter is a more 'rigorous' approach where students are taught the fundamentals such as data types, structures, flow, interpreter vs compiler, etc first; Then they are made to write programs. These programs are sometimes gamified but not to the extent as the former[...] I consider the latter "imparting knowledge" method superior. It's more in line with all the hard sciences I have been taught and all the good programmers I am aware of claim to have been taught using this method.

I realized as an adult that I do not retain knowledge if I am given that knowledge before I have any way to apply it. I suspect I'm not alone in this; but regardless, I strongly prefer the teaching methodology where you are made acquainted with tools by being given problems which necessitate using those tools. By "tools", here, I refer to algorithms and data structures, among other things. (I think this is why, even though I loved my Algorithms and Data Structures courses, I hated Operating Systems and whatever one it was that taught us assembly language. I retained very little of those and do not count them among the good or useful courses I took.)

I'm aware that this "knowledge-first-to-use-it-later" approach is similar to how the hard sciences are taught; I hated it there as well.

My actual start in programming came from hacking around in the Civilization 4 Python codebase, where I built mods for Fall From Heaven 2 and by necessity had to learn programming syntax-- I was only formally educated in programming later. Contrary to what your argument above would predict I was by far the strongest coder in my graduating class, and went on to get a job in FAANG (where I was, in my judgement, roughly at the top 20% of programmer strength in the company.)

So I don't know the total of what my "ideal programmer education" consists of, but I'm pretty sure a big chunk of it would involve writing a self-designed mod for the game Slay The Spire.

Okay okay, hear me out, this has a number of advantages:

  1. Slay the Spire is entirely programming-first. There is no "editor" interface, as a Unity game would have.

  2. Slay the Spire modding has, as its first step, decompiling the codebase. This gets your student exposure to "the act of having to understand somebody else's extremely nontrivial code".

  3. The codebase is also written using fairly reasonable best practices, particularly for a gaming studio-- it uses polymorphism to deal with all the myriad cards and their effects, which allows you to see very intuitively how polymorphism is used in the wild and why it's valuable. (I know that in my own programming education all of our programs were trivial enough that interfaces and abstract classes seemed weird and pointless, and none of my instructors could give what felt like adequate explanations for their use.)

  4. You can get something pretty cool out the other side-- a game mod! Having something cool and nontrivial that you're in the process of building is worth any number of credit points in inspiring motivation to actually learn programming.

  5. It's Java, which is a very standard programming language which features automated memory management.

So I think if I were designing a programming practicum it would feature game-modding as a big part of it, with perhaps some Code Combat or similar coding game in the first couple of weeks to familiarize students with the basic syntax and philosophy around programming in some reasonably entertaining format. And, of course, some problem sets later that showcase situations where students are given no choice but to use the standard data structures and algorithms.

Agreed that C and C++ bloooooow as starter languages. You want something with reasonable error messages and stack traces. And good IDE support-- I think statically typed is actually lower-frustration than dynamically-typed while learning because the compiler tells you if you've fucked up in a particularly obvious way before even running the program.

EDIT: Also if I never again have to write a conversion function between (pick any two) char *, wchar_t *, _bstr_t, CComBSTR, CStringA, CStringW, basic_string, and System::String it'll be too soon.