@CeePlusPlusCanFightMe's banner

CeePlusPlusCanFightMe

Self-acceptance is bunk. Engineer that shit away.

0 followers   follows 5 users  
joined 2022 September 05 17:01:33 UTC
Verified Email


User ID: 641

OH HELL YES MY HOBBY HORSE, thank you for pinging me

I actually had a completely separate post that I was just going to throw in main which made essentially the same points as you made here. In particular, Shutterstock gives absolutely no clue whatsoever what the genuine Shutterstock value add would be. Like, as a customer, why on earth would I ever go generate an AI image on Shutterstock using DALL-E when I could just use the DALL-E 2 API? Their editing tools? If that's what they're banking on, I'll just note that if Shutterstock wants to compete with Adobe in generative content creation, they... actually I don't know how to finish that sentence because it seems self-evidently like a terrible, terrible idea.
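(For the curious: "just use the DALL-E 2 API" really is about this much effort. A minimal sketch using OpenAI's Python client; the prompt is made up and this is just the plain documented image endpoint, nothing to do with Shutterstock's integration.)

```python
# Minimal sketch: generating a stock-style image straight from the DALL-E API,
# no Shutterstock middleman. Assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

response = client.images.generate(
    model="dall-e-2",
    prompt="business handshake in a sunlit office, stock-photo style",
    n=1,
    size="1024x1024",
)

print(response.data[0].url)  # URL of the generated image, ready to download
```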

Other notes:

The DALL-E integration will be available sometime in the "coming months." Crucially, Shutterstock will also ban AI-generated art that wasn't produced through OpenAI's platform. That will protect the companies' business models, of course, but it will also ensure that Shutterstock can identify the content used and pay the producers accordingly. Payments will arrive every six months and include revenue from both training data and image royalties.

Lol @ “crucially”. A ban on non-Shutterstock-sponsored AI art seems like a transparently non-functional fig leaf, given that (1) there’s no method even in principle of checking whether a piece of art is AI-generated, and (2) Adobe’s announced integration of AI with their products means that there will soon no longer be any kind of hard-and-fast distinction between “AI art” and “not AI art”. You know: “AI art? Oh, no, you misunderstand entirely, I made this myself using Adobe Illustrator.”

As an aside: this article gets a primo place on the Shutterstock blog. You will of course notice there is no corresponding article on the OpenAI blog, since OpenAI does not give a shit about this partnership except in the sense that it marginally pads their coffers if it works, and if it doesn’t, hey, it’s not their problem. Whoops, missed the part where they were providing training data.

I 100% don't get why this protects the Shutterstock business model as opposed to burning a whole bunch of money on developing an API integration that's strictly inferior to every other possible way of accessing that API.

EDIT: On reflection I should not have referred to the customer using the Shutterstock site to access DALL-E 2, since the plan seems clearly to sell the DALL-E 2 generated images as stock images (where the artist is the one using DALL-E 2). Which also seems... pointless, as a customer. Why would I want to buy limited rights to an image an AI generated when I could generate one myself for free? And why would Shutterstock have any advantage in vending out such AI-generated images as opposed to a random hypothetical AI startup?

Their plan seems clearly to exist in the very, very narrow gap between "I want something complex and specific, I'll use Adobe Illustrator" and "I want something straightforward, I'll just use a generative image model directly". This gap only narrows over time.

EDIT EDIT: My understanding right now of how art copyright works is that if you use an image you don't own the rights to, the enforcement mechanism for that is the artist coming out of the woodwork and demanding money, with proof of some kind that she created the image. I do not know what the plausible enforcement mechanism for AI art is even if it's theoretically problematic from a copyright perspective. Is a judge gonna grant you a subpoena to get the chain of custody for the image so you can verify you have the right to sue over it? What does that conversation sound like? "You can see it's AI! Just look at the hands!"

EDIT EDIT EDIT: On reflection right now the Shutterstock curation process (so you only get to see the good generations) does represent a concrete value add, but one that decreases in value over time as image generation products get better.

Rent-seeking might be too strong-- the legal insurance aspect of their work was legitimately valuable, given the total inability of anyone to validate ownership of any artwork. It's just that we're rapidly moving to a regime where it's not valuable, and I can't find anything in their quarterly reports or press releases indicating awareness of that fact or of any necessity to pivot. I think they are still in the mode of thinking AI art will forever be garbage.

Good catch on Microsoft adding DALL-E to Office. Hadn't heard about that one.

https://www.eenewseurope.com/en/openai-backs-norwegian-bipedal-robot-startup-in-23m-round/

Quite aside from the god-inna-box scenario, OpenAI wants to give its AIs robot bodies.

sci-fi scenario

My dude, we are currently in a world where a ton of people have chatbot girlfriends, and AI companies have to work hard to avoid bots accidentally passing informal Turing tests. You best start believing in sci-fi scenarios, To_Mandalay: you're in one.

Reality TV definitely gets a pass from these dynamics, since there are no writers who can get flak for representation decisions; as a result, you not only see trans people on reality TV, you also see obese and extremely dim people.

EDIT: Additionally, "he's actually trans and we just didn't mention it" is entirely legitimate if you're talking about a real-life person but considered cheap and shallow to do offscreen for a fictional character. See also when JK Rowling claimed that Dumbledore is actually gay.

I think that's basically reasonable. There is some plot stuff in Terminator which is less realistic or sensible that I'm not keen on arguing, but I feel 100% reality fidelity is unnecessary for Terminator to be an effective AI x-risk story showcasing the basic problem.

I get the impression that most of the pushback from alignment folks is because (1) they feel Terminator comparisons make the whole enterprise look unserious since Terminator is a mildly silly action franchise, and (2) that the series doesn't do a good job of pointing out why it is that it's really hard to avoid accidentally making Skynet. Like, it's easy to watch that film and think "well obviously if I were programming the AI I would just tell it to value human well-being. Or maybe just not make a military AI that I give all my guns to. Easy-peasy."

I think it's mainly the first one, though. It's already really hard to bridge the inferential distances necessary to convince normal people that AI x-risk is a thing and not a bunch of out-of-touch nerds hyperventilating about absurd hypotheticals; no point in making the whole thing harder on yourself by letting people associate your movement with a fairly-silly action franchise.

For my money, I like Mickey Mouse: Sorcerer's Apprentice as my alignment fable of choice. The autonomous brooms neither love you nor hate you. But they intend to deliver the water regardless of its impact on your personal well-being.

Disney's Fantasia: way ahead of its time.

Yup. The primary reason the anti-drug rules are important is that with them, pros will ride the razor's edge of discoverability; without them, they will ride the razor's edge of ODing or death.

I'm conflicted about this; on the one hand, international relations are disintegrating all over, what with events in Russia and China, and we can expect this to cause even further mass disruption in the economy. On the other hand, large language models seem to be the real deal in terms of AI taking over more and more low-skill tasks, and that's going to unlock a huge amount of productivity as we continue to scale up. That boost would land mostly in the US, where all of this is taking place.

I do not believe the vast majority of major economic actors are particularly tuned in to all the crazy shit going on in AI and why it matters; this is evident from, for one thing, the fact that neither third-party nor first-party analyses of Shutterstock (hobby horse of mine, I know) even mention AI as a plausible risk factor in the coming year, in spite of the fact that groups are already successfully using AI-generated images as a stock image replacement. Admittedly instances of this aren't frequent, yet, but I'd be shocked if this didn't change in the coming 1-2 years, especially if we do see a depression (leading to cost-cutting across the board).

That makes me believe even very-obviously-incoming AI advances are not actually priced into most economic indicators, including stock prices. Not sure whether, on net, we can expect economic indicators to improve or degrade going forward given all these facts.

Honestly, same. I hear about all these instances where Shutterstock/Getty Images sue random uninformed people on the internet for shitloads of money whenever they sense a violation of one of their stock image copyrights, and I think to myself, you know, maybe this business model should be burned to the ground. And the earth salted so that no such business model can ever grow again.

I see everyone arguing over "well if you make trans-women go to men's prison they'll get raped" vs "well if you make trans-women go to women's prison men will claim to be trans-women and then they'll do the raping", and both of these seem pretty obviously true.

The core issue is clearly that-- in spite of the fact that prison inmates were only ever sentenced to prison, not to repeated rape and beatings-- we nevertheless tacitly allow (what you might think of as) these extrajudicial punishments to occur, and have never bothered to build any effective safeguards against that happening.

I joke with my wife about how if we really thought that prison rape should be part of the punishment for crimes that send you to prison, we should (1) make the judge explicitly add that to the convict's sentence, in those specific words, and (2) said judicially-mandated prison rape should be performed by a generously-pensioned and fundamentally disinterested civil servant on an explicit schedule.

It is, after all, hardly less barbaric to have that same punishment levied completely at random based on how physically strong or weak the prisoner is relative to their would-be rapists.

I'll point out that the problem might not be so unsolvable as you describe; prompt engineering being what it is, a very plausible (but dystopian) way some more-capable future version of DALL-E might resolve this is by silently appending to the prompt "and also, make sure to never portray X ethnicity negatively."
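(Mechanically, that guardrail is just string concatenation on the server before the prompt hits the model. A purely hypothetical sketch; none of the names here are real OpenAI code, it just shows how thin the fix would be.)

```python
# Hypothetical server-side prompt guardrail: the user's prompt gets silently
# augmented before it ever reaches the image model. Illustrative only;
# not an actual OpenAI mechanism.
PROTECTED_GROUPS = ["ethnicity X", "ethnicity Y"]  # placeholder values

def augment_prompt(user_prompt: str) -> str:
    constraints = "; ".join(
        f"make sure to never portray {group} negatively"
        for group in PROTECTED_GROUPS
    )
    return f"{user_prompt}. Also, {constraints}."

# The user types one thing...
print(augment_prompt("an angry mob storming a castle"))
# ...and the model quietly receives the constraint-laden version.
```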

It's more that Unity's an adtech business with some game engine sales on the side; last I heard they had maybe 2/3 of their revenue coming from advertising. App Tracking Transparency savagely brutalized that business model, sadly, and I think Unity's frantically flailing around in search of a different one.

Realistically I think what happens is Unity goes up to Nintendo and says "please pay me 100 million dollars for all these installs I see using my proprietary methods" and Nintendo is like "I will counteroffer with this Twix bar" and Unity looks at the Twix bar, compares it to the prospect of a lengthy and expensive trial vs the Nintendo legal team, and accepts the counteroffer. Repeat with Microsoft et al.

Have you considered that physical appearance is one of the most malleable things about a person, particularly for a person with a high income? I have no specific knowledge of what about you is unattractive, but you have the following options open to you:

  1. Plastic surgery if it's an unattractive face or jawline or your ears stick out or whatever.

  2. Weight loss drugs if you're overweight.

  3. Testosterone replacement therapy + personal training if you have a severe lack of muscle mass. (Girls mostly really like muscle mass.)

  4. That leg-lengthening procedure if your problem is height.

  5. Wigs or medical hair replacement (dunno the clinical term) if you are balding.

This is an entirely serious comment. Western society has a stigma against trying to change your appearance in these ways, but if your appearance is an impediment to you living your best life, you should change it if you have the money, which it sounds like you will.

Do these have side effects? Yeah, probably. Life is full of tradeoffs. Still, given current medical tech the OP reads a bit like a (more expensive) version of "I am worried that no woman will ever love me because all of my clothes are ugly. Should I resign myself to dying alone, or just really go hard on settling?" My dude! Just buy some new clothes!

Self-acceptance is bunk. Engineer that shit away.

Holy shit, I think you could be right. This is exactly the kind of use case NFTs were made for-- ones where you need a foolproof immutable chain of transactions that can never go down.

I did not expect this thread to be the first time I hear of a use case for which NFTs appear to be the best solution.

Yes, but it has to be the exact same prompt with the exact same random seed (and the same model version); if someone doesn't provide you that info, there is no hope of replication.
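(To illustrate with an open model, since hosted DALL-E doesn't even expose a seed: with Stable Diffusion via the diffusers library, replication means pinning the checkpoint, the prompt, and the seed. A sketch, assuming a CUDA machine; the prompt and seed here are arbitrary.)

```python
# Sketch of reproducible generation with Stable Diffusion via diffusers.
# Same checkpoint + same prompt + same seed -> same image. Lose any one of
# those and there is no hope of replicating the output.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a lighthouse in a storm, oil painting"
generator = torch.Generator("cuda").manual_seed(1337)  # the "exact same seed"

image = pipe(prompt, generator=generator).images[0]
image.save("lighthouse_1337.png")  # rerunning this script regenerates this exact file
```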

(I would actually greatly prefer that this not be a thing because I think it would be a huge expansion of the surveillance state for what feels like a deeply silly reason, but I'm tickled regardless by someone bringing up blockchain technology in order to solve a real-world use case for which it legitimately appears to be the best solution. Absolutely wild.)

The core problem here is that Shutterstock provides a very specific service for a bunch of money, and AI art represents a means by which competition for that same service will very soon be totally free. Shutterstock adopting AI art or not doesn't really impact this core dynamic.

I think you are right that more websites like lexica.art will crop up; it's just that I expect those to be free and ad-supported and not huge moneymakers.

The value of HBD being true is basically nothing, as far as I'm concerned.

I-- and, I think, a lot of other people here-- just have an intense, instinctive flinch response to people saying things that aren't correct; when people say obvious nonsense, even if it's the most well-intentioned nonsense in the world, it triggers that flinch response. Obviously I don't say anything about it; I'm not stupid, and I value my social life.

Constrained reproduction is the stupid and unethical way to go about solving dysgenics, though-- it's never gonna happen, and if it did it would get weaponized by the people in power almost immediately against the people out of power. That's aside from any ethical considerations about involuntarily blocking people off from having kids, which are real and important.

My suggestion? Government-subsidized polygenic screening for everyone, optimizing for health and IQ, let's gooooooo

(Never solve with social technology that which you can instead solve with actual technology)

Hey, there's a reason I'm drawing this comparison here rather than, say, /r/politics.

Though I dislike the characterization that "merely feeling better about yourself" is something frivolous and unimportant. I do agree with you that trans advocates would absolutely object to my characterization above, but I think this is basically just respectability politics; people can and should reshape their body as much as technology allows to suit their desired aesthetics.

The fact that trans advocates would be likely to find the parallel unflattering I think more speaks to societal puritanism around self-modifying your appearance than it does the parallel being inappropriate.

Yeah, I'm thinking primarily of types of representation where the protected characteristic is, itself, the problematic attribute causing it to be totally absent. Mentally handicapped folks, the obese, and visibly trans women (in non-very-special episodes) are the main examples I can think of for this.

I don't think I can have an educated opinion on whether the opposition to DeFi was (a) principled advocacy for something he genuinely believed, (b) basic self-interested moves typical of big players in most industries, or (c) nefarious shit that should tank his credibility among honest folk. My money ordinarily would be on (b), but that's just priors.

On reflection I think EA as a tribal signifier has come to mean a whole bunch of different things to different people, from "we should value the lives of future people more than our own" to "maybe we should think for two seconds about cost efficiency" to "defrauding people can be good, actually" to "just donate to whoever Givewell says." This is unhelpful.

EA does not value ownership rights; if your money could do more good somewhere else, it would be positive for it to be taken from you and directed there.

I think there's this idea that utilitarianism is all like "sure, go ahead, rob people iff you can use that money better" but that's dumb strawman-utilitarianism.

The reason it's dumb is that you have to take into account second-order effects in doing whatever it is you're doing, and those second-order effects for dishonest and coercive actions are nearly always profoundly negative, in general resulting in a society where nobody can trust anyone well enough to coordinate (and also a society where nobody would want to live).

There is a reason why nobody on the EA side is defending Bankman-Fried.

Some entertainment workers deleting tweets and so on under CCP pressure suggests it's plausible that the world's richest man may turn his company into a foreign propaganda asset?

Let's make this clear: he's the world's richest man for as long as Tesla's doing well.

It asks them to inject themselves and to go on said trips, and they say "okay!"