CeePlusPlusCanFightMe

Self-acceptance is bunk. Engineer that shit away.

0 followers   follows 5 users  
joined 2022 September 05 17:01:33 UTC
Verified Email


User ID: 641


There is a phenomenon I notice in media but never hear named. Call it "Representation as Inherently Problematic."

Examples: There are no mentally handicapped people or trans people on shows that are not specifically about those topics. For mental disabilities, the reason is fairly obvious: mental handicaps are considered intrinsically undignified. If you show a mentally handicapped person doing or saying something dumb on a show, that counts as mocking a protected group. Thus: total absence.

Similarly: If you have a trans person on a show you need to make it clear to the audience they are trans, which either requires it to be a plot point (making it a sort of Very Special Episode) or making the trans person not pass (which is undignified and thus opens the writers up to criticism.) Thus: total absence.

Similarly, morbid obesity is undignified, and the morbidly obese are close to being a protected class (morbid obesity being, in effect, a physical disability). Thus, having them on a show is undignified and opens the writers up to criticism. Thus: total absence.

Another example: the Land O'Lakes mascot, a Native American woman, got criticized for being stereotypical, which in practice is synonymous with being visually identifiable as a Native American. So she was removed from the packaging.

Another: Dr. Seuss gets criticism for a visually identifiable depiction of a Chinese villager; the book gets pulled as a result.

A similar-feeling phenomenon is This Character Has Some Characteristics Of A Protected Group, Which Is Kinda Like Being A Standin For That Group, Making That Character's Poor Qualities A Direct Commentary On That Group. Examples: criticisms around Greedo and Jar Jar Binks being racist caricatures; criticisms of goblin representation in Harry Potter as being anti-semitic caricatures.

Gender dysphoria and its similarities to more general body dysphoria

So consider the /r/loseit subreddit. There are a ton of people on there who hate their appearance and would like it to be different. Consider also the community of people who get plastic surgery.

Hating your body is a very universal human experience! An experience that sucks! The interesting thing here is how the different types of "hating your body" are perceived radically differently by wider society. As in:

(1) Consensus is that weight-based body dysphoria is reasonable and you should fix it by dieting. (It can also be fixed by medication-- semaglutide/tirzepatide, in particular-- but this has not achieved widespread social acceptance.) There is also a fat-acceptance movement, but this is niche and is discouraged by obesity being comorbid with a ton of medical issues.

(2) Consensus is that age-based and (more broadly) ugliness-based body dysphoria is something you should just get over instead of addressing directly. Plastic surgery exists, but it does not have widespread social acceptance, and it is socially acceptable to make fun of women whose plastic surgeries are bad enough to be noticeable.

The common line that "cosmetic surgery won't make you feel better about yourself" is contradicted by pretty clear evidence, at least on average; a cursory Google Scholar search turns up https://academic.oup.com/asj/article/25/3/263/227685 , which claims the following:

Eighty-seven percent of patients reported satisfaction with their postoperative outcomes. Patients also reported significant improvements in their overall appearance, as well as the appearance of the feature altered by surgery, at each of the postoperative assessment points. Patients experienced significant improvements in their overall body image, their degree of dissatisfaction with the feature altered by surgery, and the frequency of negative body image emotions in specific social situations. All of these improvements were maintained 12 months after surgery.

(3) Gender dysphoria has, of course, gotten a huge amount of play in the media since addressing it optimally requires surgery and hormones in adolescence, when we mostly accept that people have not yet reached their full capacity for judgement. Plus, even in rich countries bio-engineering has not reached nearly the place it would need to in order to make neogenitalia function properly, or for "passing" to be easy for transitioners.

Is the current push for social acceptance of gender-based body modification something that will spread into other kinds of artificial body modification, such as plastic surgery for appearance or medications for weight loss?

I certainly hope so!

/r/art, having a normal one:

https://twitter.com/reddit_lies/status/1610669909842825222

If you'd prefer not to click, it's a screenshot of a mod communication in /r/art where a mod, believing that a particular user had uploaded AI art, has banned the user, and the user is appealing on the grounds that he did not use AI and in fact has a large DeviantArt portfolio in basically that style. The mod in question responded:

I don’t believe you. Even if you did “paint” it yourself, it’s so obviously an AI-prompted design that it doesn’t matter. If you really are a “serious” artist, then you need to find a different style, because A) no one is going to believe when you say it’s not AI, and B) the AI can do better in seconds what might take you hours. Sorry, it’s the way of the world.

This led to a predictable backlash, resulting in /r/art temporarily going private; the privacy appears to have been lifted as of today.

I suppose I don't have too much useful commentary except to note that identifying which art is or is not AI-generated is probably an unsolvable challenge in the general case, and that forum bans for posting it are definitely going to generate a lot of false positives. You could probably do a 90% solution where you require that all art be accompanied by Photoshop .psd files; no current art generation system makes these (though I wouldn't put money on future systems not generating .psd files from text prompts). Though of course such a rule stops users from uploading anything that wasn't done in Photoshop.

I anticipate this problem will very rapidly worsen since Emad (the Stable Diffusion guy) posted https://twitter.com/EMostaque/status/1610811234676346880?cxt=HHwWgMC8sZKS4NosAAAA , which supposedly is a very-soon-to-be-released system that resolves most of the worst problems exhibited by image generation systems (such as malformed hands, an inability to grasp prepositions, and warped text.)

I think the most likely explanation is that at some point in the past they were a legitimate business that ran out of legitimate funds, probably due to their known penchant for highly leveraged bets. Then they deluded themselves into believing that if they dipped into customer accounts they could gamble their way out, return the customers' money, and have nobody be the wiser. Cut forward some undefined span of time, and the hole gradually grew to 8 billion dollars and the whole thing collapsed.

I mostly say this because most people aren't sociopaths, and this seems like the most likely way it could have happened if Bankman-Fried is not a sociopath. If he is a sociopath and planned the elaborate fraud from the start, I guess never mind. Feels less likely, though.

Anyway, I don't think we're looking at anything more or less than a polycule of stim-abusing rationalists with a gambling problem, good PR, and access to several billion dollars with which to gamble.

I think that the main lesson here is that you can't trust people just because they use lots of ingroup shibboleths and donate lots of money to charity, even though (to be honest) that would be kinda my first impulse.

Got any examples? None spring to mind for me. Though you are right, "trans as informed attribute" would be a (ham-handed) way around this.

I did a ctrl-F on this thread for the word "china" and nothing came up, so I'll just point out that before Musk took over Twitter, China had no leverage over the platform to censor views it finds objectionable, given that Twitter is already inaccessible in China. But Musk has a lot to lose if China were to pull its support for Tesla, since so much of Tesla's manufacturing capacity is located there.

Which means that if China were to, say, take offense at the views of people who are pro-Taiwan or anti-Xinjiang-concentration-camps and wanted those views taken off of Twitter, they have a really tempting point of leverage! "That's a nice Tesla business you've got there, Musk, shame if something were to happen to it."

This is definitely the sort of thing that's already happened to other businesses over which China has had leverage-- see also https://en.wikipedia.org/wiki/Blitzchung_controversy for when Blizzard fired a bunch of people for being vocally pro-Hong Kong on stream, presumably to avoid China financially penalizing Blizzard in retaliation.

I think there's an interesting phenomenon where if somebody says "I'm pretty sure X will happen" then people are like "yeah, okay, I could see that" or "nah, I don't think that's true," whereas if somebody says "I think there's an 80% chance that X will happen," people will respond with "WHOA there, look who's larping as an economist with his fancy percentage points."

Of course, utilitarians don't believe in honesty -- it's just one more principle to be fed into the fire for instrumental advantage in manufacturing ~~paperclips~~ malaria nets.

There's a bunch of argument about what utilitarianism requires, or what deontology requires, and it seems sort of obvious to me that nobody is actually a utilitarian (as evidenced by people not immediately voluntarily equalizing their wealth), or actually a deontologist (as evidenced by our willingness to do shit like nonconsensually throwing people in prison for the greater good of not being in a crime-ridden hellhole.) I mean, really any specific philosophical school of thought will, in the appropriate thought experiment, result in you torturing thousands of puppies or letting the universe be vaporized or whatever. I don't think this says anything particularly deep about those specific philosophies aside from that it's apparently impossible to explicitly codify human moral intuitions but people really really want to anyway.

That aside, in real life self-described EAs universally seem to advocate for honesty, based on the pretty obvious point that the ability of actors to trust one another is key to getting almost anything done ever, and is what stops society from devolving into a Hobbesian war of all-against-all. And yeah, I guess if you're a good enough liar that nobody finds out you're dishonest, then you don't damage that; but really, if you think about it for like two seconds, nobody tells material lies expecting to get caught, and the obvious way to not be known for dishonesty long-term is to be honest.

As for the St. Petersburg paradox thing, yeah, that's a weird viewpoint and one that seems pretty clearly false (since marginal utility per dollar declines way more slowly on a global/altruistic scale than on an individual/selfish one, but it still does decline, and the billions-of-dollars scale seems to be about where it would start being noticeable). But I'm not sure that's really an EA thing so much as a personal idiosyncrasy.

EA does not value ownership rights; if your money could do more good somewhere else it would be positive for it to be taken from you and directed somewhere else.

I think there's this idea that utilitarianism is all like "sure, go ahead, rob people iff you can use that money better" but that's dumb strawman-utilitarianism.

The reason it's dumb is because you have to take into account second-order effects in doing whatever it is you're doing, and those second-order effects for dishonest and coercive actions are nearly always profoundly negative, in general resulting in a society where nobody can trust anyone well enough to coordinate (and also resulting in a society where nobody would want to live).

There is a reason why nobody on the EA side is defending Bankman.

Well fuck you, the burden of proof (much like in AML and foreign bribery) is on you to prove that they didn't use AI.

What constitutes proof that you made something and didn't use AI?

So something I don't really get is this:

https://www.theverge.com/2022/10/12/23400270/ai-generated-art-dall-e-microsoft-designer-app-office-365-suite

As far as I can tell the AI art generation thing has been pretty exclusively led by tiny startups; this is because unrestricted text-to-image for the masses is the mother of all adversarial environments where your AI will, regardless of the safeguards you put around it, inevitably be shown to have drawn or said something embarrassing, and if you're a tiny startup you have the luxury of not giving a shit. Not so for the big players, which is presumably why Google's never released any of their fancy text-to-image or text-to-video tech demos.

(One exception: DALL-E 2 was released by OpenAI, but they only did that after Stable Diffusion and Midjourney threatened to make it irrelevant-- that was basically a forced move.)

So. How does this not explode almost immediately in Microsoft's collective face? And why would Microsoft be leading the generative-art charge instead of Google, given Google's massive lead here?

I think a law banning AI-made images would be really, really expensive and complicated to enforce, way more so than money-laundering laws. That's because money is fungible-- one dollar is identical to another in every way that matters-- and only very specific parties are allowed to create new money. These two things simplify the anti-money-laundering project dramatically.

The first way in which this simplifies things is that anti-money-laundering systems only need to work with a very finite number of companies; these companies track detailed identity information as mandated by Know Your Customer laws, which enables the government to trace chains of transactions backward.

By implication if you wanted to do "money laundering laws, but for images" then every single stock image company-- and every other company that sells the rights to images-- needs to implement Know Your Customer laws. But it's actually even harder than that, because (since images are different from one another) you need a detailed audit trail for every image somebody uses in a way that you don't need for every individual dollar, which would enable anybody to verify that they actually own the rights to that specific image (and that the rights to the image were sold originally by a real person).

That means Shutterstock would need to maintain detailed identity information on every artist uploading images, as well as contact information which can never go out of date (or else they will lose their ability to confirm that any given image was actually drawn by that artist.) If any contact information does go out of date-- or if they have an outage resulting in data loss-- then instantly you have the security vulnerability of "oh, sure, John Johnson drew that picture, oh whoops I guess Shutterstock lost the info on that picture lol guess you can't verify it." And sure, you can always say "sorry bro, burden of proof's on you," but this would mean that if either John Johnson dies or Shutterstock has data loss or Shutterstock goes bankrupt (thereby losing the ability to validate image rights) everyone who ever purchased stock imagery from Shutterstock is suddenly in breach of the anti-image-laundering laws. Which would be... interesting.

The second way money laundering is a simpler problem is that only very specific parties are allowed to create new money. This fact means that if some new money appears out of nowhere, somebody has definitely committed a crime, and it's (relatively) simple to figure out who-- just trace the chain of transactions backward. If new pictures come out of nowhere, that's not really a signal of anything except that artists exist, and I guess the person furthest back in the chain is the artist.

The problems needing to be solved here are actually quite similar to the problems involved in validating copyrights to a given image, which is also an unsolved problem (thus why Shutterstock has to offer legal indemnities when you purchase usage rights for an image).

I'll point out that the problem might not be as unsolvable as you describe; prompt engineering being what it is, a very thinkable (but dystopian) way some more-capable future version of DALL-E might resolve this is by appending to every prompt "and also, make sure to never portray X ethnicity negatively."

This is hardly a one-off-- there was a nearly identical incident with an NBA player (see https://time.com/5694150/nba-china-hong-kong/ ). There was also an incident with Disney putting out a pro-Dalai Lama movie, which China took umbrage at; the result was that Disney apologized and promised never to do it again: https://asia.nikkei.com/Opinion/Disney-s-magical-thinking-won-t-keep-politics-away-from-Mulan . I have not bothered to dredge up further examples, but it seems like there are a lot of them, and the net effect is even greater given that the way to avoid getting embroiled in similar scandals is to never offend China in the first place.

china having a similar level of 'influence' with many other executives and companies in america due to the very deep trade ties between us

They do indeed have that level of financial influence and it is indeed significant in practice; the fact that China's influence is felt in a huge number of other places in the US economy is not a reason to feel better about China having similar leverage over the owner of Twitter.

EDIT: In past incidents where China has exerted leverage, the response from American politicians has generally been nothing more than worried hand-wringing. I see no particular reason it'll be different for China exerting influence over Twitter, especially if that influence comes in the form of Twitter algorithmically downplaying stuff China might get offended by.

Why on earth would a deontologist object to throwing someone in prison if they're guilty of the crime and were convicted in a fair trial?

Fair enough! I suppose it depends on whether you view the morally relevant action as "imprisoning someone against their will" (bad) vs "enforcing the law" (good? Depending on whether you view the law itself as a fundamentally consequentialist instrument).

That's like saying that Christians don't actually believe that sinning is bad because even Christians occasionally sin. You can genuinely believe in moral obligations even if the obligations are so steep that (almost) no one fully discharges them.

I think the relevant distinction here is that not only do I not give away all my money, I also don't think anyone else has an obligation to give away all their money. I do not acknowledge this as an action I or anyone else is obligated to perform, and I believe this view is shared by most everyone who's not Peter Singer. (Also, taking Peter Singer as the typical utilitarian seems like a poor decision; I have no particular desire to defend his utterances, and nor do most people.)

On reflection, I think that everyone actually makes moral decisions based on a system where every action has some (possibly negative) number of Deontology Points and some (possibly negative) number of Consequentialist Points; we weight those in some way, tally them up, and if the outcome is positive we do the action.

That's why I not only would myself, but would also endorse others, stealing loaves of bread to feed my starving family. Stealing the bread? A little bad, deontology-wise. Family starving? Mega-bad, utility-wise. (You could try to rescue pure-deontology by saying that the morally-relevant action being performed is "letting your family starve" not "stealing a loaf of bread" but I would suggest that this just makes your deontology utilitarianism with extra steps.)
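For what it's worth, the tally-up model is easy to make concrete. A minimal sketch in Python, with all action names, point values, and weights invented purely for illustration:

```python
# Toy model: each action carries some Deontology Points and some
# Consequentialist Points (either may be negative); weight them,
# sum them, and perform the action iff the total is positive.
def should_do(action, deont_weight=1.0, conseq_weight=1.0):
    score = (deont_weight * action["deontology_points"]
             + conseq_weight * action["consequentialist_points"])
    return score > 0

# Stealing bread to feed a starving family: a little bad
# deontology-wise, mega-good utility-wise -- so the tally says do it.
steal_bread = {"deontology_points": -1, "consequentialist_points": 5}
print(should_do(steal_bread))  # True
```

Note that pure deontology and pure utilitarianism fall out as special cases (set one weight to zero), which is part of why neither matches how anyone actually behaves.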

I can't think of any examples off the top of my head where the opposite tradeoff realistically occurs, negative utility points in exchange for positive deontology points.

Sure, except for when it really matters

I mean... yeah? The lying-to-an-axe-murderer thought experiment is a staple for a reason.

Yeah, but it's worth considering the inconvenience involved in having to track which rights you have purchased to which media, especially if you're a small business using a bunch of them. AI art lacks this issue, since the image is unique and you know nobody else holds the rights to it.

And people using stock images are people who are, for the most part, running small businesses, not consumers who we might expect to be lazy.

I'm not dismissing garbage collection wholesale. I'm dismissing programmers who have known nothing else.

Eh, this basically feels like a box out of the famous XKCD comic.

I'm conflicted about this. On the one hand, international relations are disintegrating all over, what with events in Russia and China, and we can expect this to cause even further mass disruption in the economy. On the other hand, large language models seem to be the real deal in terms of AI taking over more and more low-skill tasks, and that's going to unlock a huge amount of productivity as we continue to scale up. This would be mostly in the US, where all of this is taking place.

I do not believe the vast majority of major economic actors are particularly tuned in to all the crazy shit going on in AI and why it matters; this is evident from, for one thing, the fact that neither third-party nor first-party analyses of Shutterstock (hobby horse of mine, I know) so much as mention AI as a plausible risk factor in the coming year, in spite of the fact that groups are already successfully using AI-generated images as a stock-image replacement. Admittedly, instances of this aren't frequent yet, but I'd be shocked if that didn't change in the coming 1-2 years, especially if we do see a depression (leading to cost-cutting across the board).

That makes me believe even very-obviously-incoming AI advances are not actually priced into most economic indicators, including stock prices. I'm not sure whether, on net, we can expect economic indicators to improve or degrade going forward, given all these facts.

The details of what counts as "negative" would be determined based on the language model's own ideas of what constitutes "negative" based on its time spent with the training data. This is likely, for the most part, to align with conventional understandings of what is "negative".

Fair in general, but he is a central figure in EA specifically, and arguably its founder.

Yeah, fair, I'll cop to him being the founder (or at least popularizer) of EA. Though I disclaim any obligation to defend weird shit he says.

I think one thing that I dislike about the discourse around this is it kinda feels mostly like vibes-- "how much should EA lose status from the FTX implosion"-- with remarkably little in the way of concrete policy changes recommended even from detractors (possible exception: EA orgs sending money they received from FTX to the bankruptcy courts for allocation to victims, which, fair enough.)

On a practical level, current EA "doctrine" or whatever is that you should throw down 10% of your income to do the maximum amount of good you think you can do, which, as far as I can tell, is basically uncontroversial.

Or to put it another way-- suppose I accepted your position that EA as it currently stands is way too into St. Petersburging everyone off a cliff, and way too into violating deontology in the name of saving lives in the third world. Would you perceive it as a sufficient remedy for EA leaders to disavow those perspectives in favor of prosocial varieties of giving to the third world? If not, what should EAs say or do differently?

Yup. The primary reason the anti-drug rules are important is that with them, pros will ride the razor's edge of discoverability; without them, they will ride the razor's edge of ODing or death.

So that gets us to the question of how much difference there is in practice between an "80% ± 20% chance" and an "80% ± 0% chance" of a thing happening. I suspect in practice not much, since anything that feeds into your meta-level uncertainty about a probability score should also propagate down into your object-level uncertainty about the actual thing happening.
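A quick way to see this: under a toy assumption (p drawn uniformly from [0.6, 1.0], i.e. "80% ± 20%" -- the distribution is invented for illustration), the meta-level spread washes out of the marginal probability of the event, which lands at E[p] = 0.8 either way. A minimal simulation sketch:

```python
import random

random.seed(0)
N = 100_000

# Case 1: point estimate -- the event happens with p = 0.8 exactly.
hits_point = sum(random.random() < 0.8 for _ in range(N))

# Case 2: meta-uncertainty -- first draw p uniformly from [0.6, 1.0]
# (mean 0.8), then sample the event at that p.
hits_meta = 0
for _ in range(N):
    p = random.uniform(0.6, 1.0)
    if random.random() < p:
        hits_meta += 1

print(hits_point / N)  # close to 0.8
print(hits_meta / N)   # also close to 0.8
```

The two cases do differ if you get to observe outcomes and update: the "± 20%" believer moves their estimate on new evidence, while the "± 0%" believer doesn't. But for a one-shot forecast, the numbers coincide.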

Guys! There is a simple explanation for this that explains everything:

This is an outage. Twitter's load balancers or whatever are fucked, and they can only serve a small percentage of typical traffic. This is damage control, I guess to avoid acknowledging an outage?

It feels likely this is in some way related to Twitter not paying their Google Cloud bills, as has been reported by various sources.

Have you considered that physical appearance is one of the most malleable things about a person, particularly for a person with a high income? I have no specific knowledge of what about you is unattractive, but you have the following options open to you:

  1. plastic surgery if it's an unattractive face or jawline or your ears stick out or whatever

  2. weight loss drugs if you're overweight

  3. testosterone replacement therapy + personal training if you have a severe lack of muscle mass. (Girls mostly really like muscle mass.)

  4. that leg-lengthening procedure if your problem is height

  5. wigs or medical hair replacement (dunno the clinical term) if you are balding.

This is an entirely serious comment. Western society has a stigma against trying to change your appearance in these ways, but if your appearance is an impediment to you living your best life, you should change it if you have the money, which it sounds like you will.

Do these have side effects? Yeah, probably. Life is full of tradeoffs. Still, given current medical tech, the OP reads a bit like a (more expensive) version of "I am worried that no woman will ever love me because all of my clothes are ugly. Should I resign myself to dying alone, or just really go hard on settling?" My dude! Just buy some new clothes!

Self-acceptance is bunk. Engineer that shit away.

Currently you're totally right. But I'll point out that the reason it takes ten minutes is because right now AI art kinda sucks (so it takes a while to get a prompt that looks okay), and the tech only gets better from here on out.