CeePlusPlusCanFightMe

Self-acceptance is bunk. Engineer that shit away.

0 followers   follows 5 users  
joined 2022 September 05 17:01:33 UTC
Verified Email

User ID: 641

I feel like one core insight of cancel culture is that if you have 1000 detractors and 20000 supporters the detractors can still make your life shit in ways your supporters can't really help with (phoning your boss, doxxing you, sending rape threats, harassing organizations that have the capacity to inconvenience you, etc.)

The answer seems obvious: instead of taking someone's word for or against AI x-risk being a thing, evaluate the arguments for it on their merits, and decide for yourself whether it's something to be concerned about on that basis.

You make an interesting comparison to Photoshop, since people are already used to not thinking of Photoshop as being responsible for what people create with the app.

I guess it depends on the degree to which the LLM is perceived to be creating an image from whole cloth vs. "just helping."

I really don't want to do women dirty like this but, I have yet to come across a "good" female programmer. I really don't know what it is at the root of this.

It could just be that there are so few in the first place. The proportion of coworkers of any gender that I consider particularly good programmers is quite low, and over a period of ten years I've had roughly... three female programming co-workers?

I don't recall them being remarkably good or bad. Like most of my coworkers I would class their code as "basically serviceable."

Have you known a lot of male coworkers that you viewed as being remarkably good coders?

OH HELL YES MY HOBBY HORSE, thank you for pinging me

I actually had a completely separate post I was just going to throw in main that made essentially the same points you made here. In particular, Shutterstock gives absolutely no clue whatsoever what the genuine Shutterstock value add would be. Like, as a customer, why on earth would I ever generate an AI image on Shutterstock using DALL-E when I could just use the DALL-E 2 API? Their editing tools? If that's what they're banking on, I'll just note that if Shutterstock wants to compete with Adobe in generative content creation, they... actually, I don't know how to finish that sentence, because it seems self-evidently like a terrible, terrible idea.

Other notes:

The DALL-E integration will be available sometime in the "coming months." Crucially, Shutterstock will also ban AI-generated art that wasn't produced through OpenAI's platform. That will protect the companies' business models, of course, but it will also ensure that Shutterstock can identify the content used and pay the producers accordingly. Payments will arrive every six months and include revenue from both training data and image royalties.

Lol @ “crucially”. A ban on non-Shutterstock-sponsored AI art seems like a transparently non-functional fig leaf given that (1) there’s no method, even in principle, of checking whether a piece of art is AI-generated, and (2) Adobe’s announced integration of AI with their products means that there will soon no longer be any kind of hard-and-fast distinction between “AI art” and “not AI art”. You know: “AI art? Oh, no, you misunderstand entirely, I made this myself using Adobe Illustrator.”

As an aside: this article gets a primo place in the Shutterstock blog. You will of course notice there is no corresponding article in the OpenAI blog, since OpenAI does not give a shit about this partnership except in the sense that it marginally pads their coffers if it works, and if it doesn’t, hey, it’s not their problem. Whoops, missed the part where they were providing training data.

I 100% don't get why this protects the Shutterstock business model as opposed to burning a whole bunch of money on developing an API integration that's strictly inferior to every other possible way of accessing that API.

EDIT: On reflection I should not have referred to the customer using the Shutterstock site to access DALL-E 2, since the plan seems clearly to sell the DALL-E 2 generated images as stock images (where the artist is the one using DALL-E 2). Which also seems... pointless, as a customer. Why would I want to buy limited rights to an image an AI generated when I could generate one myself for free? And why would Shutterstock have any advantage in vending out such AI-generated images as opposed to a random hypothetical AI startup?

Their plan seems clearly to exist in the very very narrow gap between "I want something complex and specific, I'll use Adobe Illustrator" and "I want something straightforward, I'll just use a generative image directly". This gap only narrows over time.

EDIT EDIT: My understanding right now of how art copyright works is that if you use an image you don't own the rights to, the enforcement mechanism for that is the artist coming out of the woodwork and demanding money, with proof of some kind that she created the image. I do not know what the plausible enforcement mechanism for AI art is even if it's theoretically problematic from a copyright perspective. Is a judge gonna grant you a subpoena to get the chain of custody for the image so you can verify you have the right to sue over it? What does that conversation sound like? "You can see it's AI! Just look at the hands!"

EDIT EDIT EDIT: On reflection right now the Shutterstock curation process (so you only get to see the good generations) does represent a concrete value add, but one that decreases in value over time as image generation products get better.

Lab leaks already happen by accident. Why would you believe it's so hard to engineer a lab leak directly, given (1) superintelligence and (2) the piles of money a superintelligence can easily earn via hacking/crypto/stock trading/intellectual labor?

Oh, shit, I didn't know about the Black Donald Trump thing. That's hilarious.

Yeah, okay, it's a fair cop; even such a policy as I describe would result in amazing PR debacles.

Yup. The primary reason the anti-drug rules are important is that with them, pros will ride the razor's edge of discoverability; without them, they'll ride the razor's edge of ODing or death.

Rent-seeking might be too strong-- the legal-insurance aspect of their work was legitimately valuable, given the total inability of anyone to validate ownership of any artwork. It's just that we're rapidly moving to a regime where it's not valuable, and I can't find anything in their quarterly reports or press releases indicating awareness of that fact or of any need to pivot. I think they're still in the mode of thinking AI art will forever be garbage.

Good catch on Microsoft adding DALL-E to Office. I hadn't heard about that one.

It asks them to inject themselves and to go on said trips, and they say "okay!"

The value of HBD being true is basically nothing, as far as I'm concerned.

I-- and, I think, a lot of other people here-- just have an intense, instinctive flinch response to people saying things that aren't correct. When people say obvious nonsense, even if it's the most well-intentioned nonsense in the world, it triggers that flinch response. Obviously I don't say anything about it; I'm not stupid, and I value my social life.

Constrained reproduction is the stupid and unethical way to go about solving dysgenics, though-- it's never gonna happen, and if it did it would get weaponized by the people in power almost immediately against the people out of power. That's aside from any ethical considerations about involuntarily blocking people off from having kids, which are real and important.

My suggestion? Government-subsidized polygenic screening for everyone, optimizing for health and IQ, let's gooooooo

(Never solve with social technology that which you can instead solve with actual technology)

I see everyone arguing over "well, if you make trans-women go to men's prison, they'll get raped" vs. "well, if you make trans-women go to women's prison, men will claim to be trans-women and then they'll do the raping", and both of these seem pretty obviously true.

The core issue is clearly that-- in spite of the fact that prison inmates were only ever sentenced to prison, not to repeated rape and beatings-- we nevertheless tacitly allow (what you might think of as) these extrajudicial punishments to occur, and have never bothered to build any effective safeguards against that happening.

I joke with my wife about how if we really thought that prison rape should be part of the punishment for crimes that send you to prison, we should (1) make the judge explicitly add that to the convict's sentence, in those specific words, and (2) said judicially-mandated prison rape should be performed by a generously-pensioned and fundamentally disinterested civil servant on an explicit schedule.

It is, after all, hardly less barbaric to have that same punishment levied completely at random based on how physically strong or weak the prisoner is relative to their would-be rapists.

Yeah, I'm thinking primarily of types of representation where the protected characteristics are themselves the problematic attributes causing the group to be absent entirely. Mentally handicapped folks, the obese, and visibly trans women (outside of very special episodes) are the main examples I can think of here.

morally, i feel i should be able to lose weight myself

No! Bad! The decision to take a drug is a practical one with no moral implications. Similar statements include "morally, i feel i should be able to drive a bit longer without stopping at a rest area" or "morally, i feel i should be able to walk to the grocery store rather than drive."

I think that's basically reasonable. There is some plot stuff in Terminator which is less realistic or sensible that I'm not keen on arguing, but I feel 100% reality fidelity is unnecessary for Terminator to be an effective AI x-risk story showcasing the basic problem.

I get the impression that most of the pushback from alignment folks is because (1) they feel Terminator comparisons make the whole enterprise look unserious since Terminator is a mildly silly action franchise, and (2) that the series doesn't do a good job of pointing out why it is that it's really hard to avoid accidentally making Skynet. Like, it's easy to watch that film and think "well obviously if I were programming the AI I would just tell it to value human well-being. Or maybe just not make a military AI that I give all my guns to. Easy-peasy."

I think it's mainly the first one, though. It's already really hard to bridge the inferential distances necessary to convince normal people that AI x-risk is a thing and not a bunch of out-of-touch nerds hyperventilating about absurd hypotheticals; no point in making the whole thing harder on yourself by letting people associate your movement with a fairly-silly action franchise.

For my money, I like Mickey Mouse: Sorcerer's Apprentice as my alignment fable of choice. The autonomous brooms neither love you nor hate you. But they intend to deliver the water regardless of its impact on your personal well-being.

Disney's Fantasia: way ahead of its time.

I'm conflicted about this. On the one hand, international relations are disintegrating, what with events in Russia and China, and we can expect this to cause even further mass disruption in the economy. On the other hand, large language models seem to be the real deal in terms of AI taking over more and more low-skill tasks, and that's going to unlock a huge amount of productivity as we continue to scale up-- mostly in the US, where all of this is taking place.

I do not believe the vast majority of major economic actors are particularly tuned in to all the crazy shit going on in AI and why it matters; this is evident from, for one thing, the fact that neither third-party nor first-party analyses of Shutterstock (hobby horse of mine, I know) even mention AI as a plausible risk factor in the coming year, in spite of the fact that groups are already successfully using AI-generated images as a stock-image replacement. Admittedly, instances of this aren't frequent yet, but I'd be shocked if this didn't change in the coming 1-2 years, especially if we do see a depression (leading to cost-cutting across the board).

That makes me believe even very-obviously-incoming AI advances are not actually priced into most economic indicators, including stock prices. Not sure whether, on net, we can expect economic indicators to improve or degrade going forward given all these facts.

Honestly, same. I hear about all these instances where Shutterstock/Getty Images sue random uninformed people on the internet for shitloads of money whenever they sense a violation of one of their stock image copyrights, and I think to myself, you know, maybe this business model should be burned to the ground. And the earth salted so that no such business model can ever grow again.

EA does not value ownership rights; if your money could do more good somewhere else it would be positive for it to be taken from you and directed somewhere else.

I think there's this idea that utilitarianism is all like "sure, go ahead, rob people iff you can use that money better" but that's dumb strawman-utilitarianism.

The reason it's dumb is because you have to take into account second-order effects in doing whatever it is you're doing, and those second-order effects for dishonest and coercive actions are nearly always profoundly negative, in general resulting in a society where nobody can trust anyone well enough to coordinate (and also resulting in a society where nobody would want to live).

There is a reason why nobody on the EA side is defending Bankman.

I'll point out that the problem might not be so unsolvable as you describe; prompt engineering being what it is, a very thinkable (but dystopian) way some more-capable future version of DALL-E might resolve this is by adding to the prompt "and also, make sure to never portray X ethnicity negatively."

Holy shit, I think you could be right. This is exactly the kind of use case NFTs were made for-- ones where you need a foolproof immutable chain of transactions that can never go down.

I did not expect this thread to be the first time I hear of a use case for which NFTs appear to be the best solution.
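For concreteness, here is a toy sketch (not any real NFT or blockchain implementation; all names are illustrative) of the hash-chained provenance record that the use case above relies on: each entry's hash covers both the image and the previous entry, so earlier history can't be rewritten without breaking every later link.

```python
import hashlib
import json

def append_record(chain: list, image_bytes: bytes, owner: str) -> list:
    # Each entry commits to the image's hash and to the previous entry's
    # hash, forming a tamper-evident chain of custody.
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    entry = {
        "owner": owner,
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "prev": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    chain.append(entry)
    return chain

def verify(chain: list) -> bool:
    # Recompute every hash and check every back-link; any edit to an
    # earlier entry invalidates all entries after it.
    prev = "genesis"
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True
```

A real blockchain adds decentralized consensus on top of this so no single party controls the ledger, but the tamper-evidence itself is just the hash chain.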

Yes, but it has to be the exact same prompt with the exact same random seed. If someone doesn't provide you that info there is no hope of replication.
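A minimal sketch of that determinism point (a stand-in PRNG, not any actual image model): generation is deterministic given the prompt and seed, so replication succeeds with both and is hopeless without them.

```python
import random

def generate(prompt: str, seed: int) -> list[float]:
    # Stand-in for a diffusion model's sampling noise: seeding the PRNG
    # with (prompt, seed) makes every "generation" exactly repeatable.
    rng = random.Random(f"{prompt}:{seed}")
    return [rng.random() for _ in range(4)]

a = generate("a cat wearing a hat", seed=42)
b = generate("a cat wearing a hat", seed=42)
c = generate("a cat wearing a hat", seed=43)
assert a == b  # same prompt + same seed: identical output
assert a != c  # different seed: replication fails
```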

(I would actually greatly prefer that this not be a thing because I think it would be a huge expansion of the surveillance state for what feels like a deeply silly reason, but I'm tickled regardless by someone bringing up blockchain technology in order to solve a real-world use case for which it legitimately appears to be the best solution. Absolutely wild.)

The core problem here is that Shutterstock provides a very specific service for a bunch of money, and AI art represents a means by which competition for that same service will very soon be totally free. Whether or not Shutterstock adopts AI art doesn't really change this core dynamic.

I think you're right that more websites like lexica.art will crop up; it's just that I expect those to be free, ad-supported, and not huge moneymakers.

I think it's more like pointing out that there's no particular reason the EA charities should have been able to spot a fraud when the fraud went unspotted by a huge number of highly motivated traders whose job is, in part, to spot that sort of thing (so that they can either avoid it or make trades based around its existence).

Would you require monogamy from the woman? And if so: why?