
CeePlusPlusCanFightMe

Self-acceptance is bunk. Engineer that shit away.

0 followers   follows 5 users  
joined 2022 September 05 17:01:33 UTC
Verified Email

User ID: 641

No bio...
What you're saying checks out. One way of putting it might be that the craft of art is helped by AI-- as an artist I can do more, on an objective level-- but the profession of being an artist is irrecoverably damaged.

It does seem fair to not want to be the test legal case for AI art.

As an aside, I'm curious how much Shutterstock got paid for the training data they sold to OpenAI.

I think if it's a binary choice, 50% is exactly right, since if you don't know what a house is, no process of reasoning could get you better than a coin flip at the right answer. Similarly if you have N different choices that you can't distinguish between in any meaningful way.
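This is easy to check numerically. A minimal sketch (my own illustration, not from the original comment): when nothing distinguishes the options, every guessing strategy converges to 1/N accuracy.

```python
import random

def guess_accuracy(n_choices: int, trials: int = 100_000, seed: int = 0) -> float:
    """With no information distinguishing the options, any guessing
    strategy identifies the right answer 1/n of the time."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        truth = rng.randrange(n_choices)  # the actual right answer
        guess = rng.randrange(n_choices)  # any fixed or random rule does equally well
        hits += (guess == truth)
    return hits / trials

print(guess_accuracy(2))  # ~0.5: a coin flip is the ceiling
print(guess_accuracy(4))  # ~0.25
```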

It's important because I find it unaesthetic to have athletes dying from ODs on PE drugs, and, more crucially, so do the people running the olympics.

That is an excellent point, right up there with the fact that because illegal drugs are illegal, people get them from street dealers, whose product is going to be massively more dangerous than a theoretical legal equivalent.

Seems fair. I do think it's also not yet clear how suing someone over AI art would work. For ordinary art, if you use an image you don't have rights to, the way that shakes out is that the person who originally made the image sues you for damages (and they can prove they made it because they presumably have some timestamped evidence indicating so).

But who would be responsible for noticing and suing somebody who made an image with an AI trained on copyrighted images? How would they know it was AI-generated? Sure, subpoena a chain of custody for the image, fine. But how are you going to get a judge to agree that this piece of art looks like it was generated from copyrighted images, if the image itself does not contain those images? You've gotta get the judge on board to get the subpoena.

Currently you're totally right. But I'll point out that the reason it takes ten minutes is because right now AI art kinda sucks (so it takes a while to get a prompt that looks okay), and the tech only gets better from here on out.

it's possible that courts will start demanding a chain of custody for art, but I can't imagine that's terribly likely given the insane logistical challenges involved in enforcement.

I think there's also currently the quality factor-- right now AI art honestly kinda sucks, and I have to go through dozens of generations to get something half-decent. I expect this to change very quickly over the next several months, since, as Gwern says, attacks only get better.

I expect this to work right up to the point where there's an economic downturn and customers look around for line items they can cut from their budget.

EDIT: Ahh, that was probably not actually right, given that Shutterstock's subscription plan is fairly reasonably priced.

Yeah, but worth considering the inconvenience involved in having to track which rights you have purchased to which media, especially if you're a small business using a bunch of them. AI art lacks this issue, since you know nobody has the rights to the image because it's unique.

And people using stock images are people who are, for the most part, running small businesses, not consumers who we might expect to be lazy.

OH HELL YES MY HOBBY HORSE, thank you for pinging me

I actually had a completely separate post I was going to throw in main that made essentially the same points you made here. In particular, Shutterstock gives absolutely no clue whatsoever what the genuine Shutterstock value add would be. Like, as a customer, why on earth would I ever generate an AI image on Shutterstock using DALL-E when I could just use the DALL-E 2 API? Their editing tools? If that's what they're banking on, I'll just note that if Shutterstock wants to compete with Adobe in generative content creation, they... actually, I don't know how to finish that sentence, because it seems self-evidently like a terrible, terrible idea.

Other notes:

The DALL-E integration will be available sometime in the "coming months." Crucially, Shutterstock will also ban AI-generated art that wasn't produced through OpenAI's platform. That will protect the companies' business models, of course, but it will also ensure that Shutterstock can identify the content used and pay the producers accordingly. Payments will arrive every six months and include revenue from both training data and image royalties.

Lol @ “crucially”. A ban on non-Shutterstock-sponsored AI art seems like a transparently non-functional fig leaf, given that (1) there’s no method, even in principle, of checking whether a piece of art is AI-generated, and (2) Adobe’s announced integration of AI with their products means that there will soon no longer be any kind of hard-and-fast distinction between “AI art” and “not AI art”. You know: “AI art? Oh, no, you misunderstand entirely, I made this myself using Adobe Illustrator.”

As an aside: this article gets a primo place on the Shutterstock blog. You will of course notice there is no corresponding article on the OpenAI blog, since OpenAI does not give a shit about this partnership except in the sense that it marginally pads their coffers if it works, and if it doesn’t, hey, it’s not their problem. Whoops, missed the part where they were providing training data.

I 100% don't get why this protects the Shutterstock business model as opposed to burning a whole bunch of money on developing an API integration that's strictly inferior to every other possible way of accessing that API.

EDIT: On reflection I should not have referred to the customer using the Shutterstock site to access DALL-E 2, since the plan seems clearly to sell the DALL-E 2 generated images as stock images (where the artist is the one using DALL-E 2). Which also seems... pointless, as a customer. Why would I want to buy limited rights to an image an AI generated when I could generate one myself for free? And why would Shutterstock have any advantage in vending out such AI-generated images as opposed to a random hypothetical AI startup?

Their plan seems clearly to exist in the very very narrow gap between "I want something complex and specific, I'll use Adobe Illustrator" and "I want something straightforward, I'll just use a generative image directly". This gap only narrows over time.

EDIT EDIT: My understanding right now of how art copyright works is that if you use an image you don't own the rights to, the enforcement mechanism for that is the artist coming out of the woodwork and demanding money, with proof of some kind that she created the image. I do not know what the plausible enforcement mechanism for AI art is even if it's theoretically problematic from a copyright perspective. Is a judge gonna grant you a subpoena to get the chain of custody for the image so you can verify you have the right to sue over it? What does that conversation sound like? "You can see it's AI! Just look at the hands!"

EDIT EDIT EDIT: On reflection right now the Shutterstock curation process (so you only get to see the good generations) does represent a concrete value add, but one that decreases in value over time as image generation products get better.

Yup. The primary reason the anti-drug rules are important is that with them, pros will ride the razor's edge of discoverability; without them, they'll ride the razor's edge of overdose or death.

For the record, I'm definitely not convinced that "80% ± 20% chance" is a coherent thought.

Here's a thought experiment: I give you a coin, which is a typical one and therefore has a 50% chance of landing heads or tails. If I asked you the probability it lands on heads, you'd say 50%, and you'd be right.

Now I give you a different coin. I have told you it is weighted, so that it has an 80% chance of landing on one side and a 20% chance of landing on the other (but I haven't told you whether it favors heads or tails). If I asked you the probability it lands heads when flipped, you should still say 50%.

That's because probabilities are a measure of your own subjective uncertainty about the set of possible outcomes. Probabilities are not a fact about the universe. (This is trivially true because a hypothetical omniscient being would know with 100% certainty the results of every future coinflip, thereby rendering them, by a certain definition, "nonrandom". But they would still be random to humans.)

So that gets us to the question of how much difference there is, in practice, between an "80% ± 20% chance" and an "80% ± 0% chance" of a thing happening. I suspect not much? Since anything that feeds into your meta-level uncertainty about a probability score should also propagate down into your object-level uncertainty about the actual thing happening.
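Both the mystery-coin thought experiment and the "meta-uncertainty washes out" claim can be simulated. A quick sketch (my own illustration; I'm assuming "80% ± 20%" means the true bias is equally likely to be 0.6 or 1.0):

```python
import random

def flip(p_heads: float, rng: random.Random) -> bool:
    """One flip of a coin with the given heads probability."""
    return rng.random() < p_heads

def marginal_heads(biases: list[float], trials: int = 200_000, seed: int = 0) -> float:
    """Each trial: draw the unknown true bias uniformly from `biases`
    (the meta-level uncertainty), then flip once."""
    rng = random.Random(seed)
    heads = sum(flip(rng.choice(biases), rng) for _ in range(trials))
    return heads / trials

print(marginal_heads([0.8]))       # "80% +- 0%"  -> ~0.80
print(marginal_heads([0.6, 1.0]))  # "80% +- 20%" -> also ~0.80
print(marginal_heads([0.2, 0.8]))  # mystery weighted coin -> ~0.50
```

The symmetric meta-uncertainty averages out, so the marginal probability you should act on is the same either way.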

Yeah, I'm concerned about the "destruction of the human species" angle. I've been mulling over whether in surviving timelines TSM is disproportionately likely to get destroyed by China, thereby stalling AI advancement and also plunging the world into a depression since everyone needs their stuff.

Sure. https://stratechery.com/2022/the-ai-unbundling/ did it for their article and https://www.thebulwark.com/trumps-save-america-scam/ credits midjourney for their cover art-- the latter is significant because the article has nothing to do with AI. I'd expect this kind of thing to start with small, cost-conscious, less fearful-of-controversy venues and then to accelerate to larger venues as it becomes normalized.

EDIT: I probably shouldn't use the stratechery article as an example, since it's actually about AI advancements and I figure it's better to discount that sort of article in gauging ai art acceptance.

Eh, I doubt it's anything that logical. "Pretty sure that X" is, I think, just a colloquialism whose meaning is synonymous with "roughly 80% chance of X", similar to how "I'm basically certain of X" cashes out to "roughly 98% chance of X". Do you think of these statements as being fundamentally different in some way?

I think there's an interesting phenomenon where if somebody says "I'm pretty sure X will happen" then people are like "yeah, okay, I could see that" or "nah, I don't think that's true" whereas if somebody says "I think there's an 80% chance that X will happen" people will respond with "WHOA there, look who's larping as an economist with his fancy percentage points"

I'm conflicted about this; on the one hand, international relations are disintegrating all over what with Russia and China events, and we can expect this to cause even further mass disruption in the economy. On the other hand, large language models seem to be the real deal in terms of AI taking over more and more low-skill tasks, and that's going to unlock a huge amount of productivity as we continue to scale up. This would be mostly in the US where all of this is taking place.

I do not believe the vast majority of major economic actors are particularly tuned in to all the crazy shit going on in AI and why it matters; this is evident from, for one thing, the fact that neither third-party nor first-party analyses of Shutterstock (hobby horse of mine, I know) even mention AI as a plausible risk factor in the coming year, in spite of the fact that groups are already successfully using AI-generated images as a stock image replacement. Admittedly, instances of this aren't frequent yet, but I'd be shocked if that didn't change in the coming 1-2 years, especially if we do see a depression (leading to cost-cutting across the board).

That makes me believe even very-obviously-incoming AI advances are not actually priced into most economic indicators, including stock prices. Not sure whether, on net, we can expect economic indicators to improve or degrade going forward given all these facts.

So I actually saw just a couple days ago someone released a proof-of-concept that used GPT-3 to substitute for the "human" part of RLHF (reinforcement-learning-with-human-feedback), and apparently it worked rather well at avoiding really blatant Goodharting; see https://openreview.net/forum?id=10uNUgI5Kl . Given the obvious interpretability advantages of an AI whose "thoughts" are represented in human-readable English, I wouldn't be all that surprised if this kind of thing scaled way way up is how we get AGI.
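A toy sketch of the general idea, not the linked paper's actual method: use a model-based judge in place of human feedback, here in its simplest form (best-of-n selection rather than RL proper). The scoring function is a hypothetical stand-in; in the real setup a language model would do the rating.

```python
def judge_score(prompt: str, response: str) -> float:
    """Stand-in for an LM judge: in the real setup a language model
    would rate how well the response addresses the prompt."""
    # Toy heuristic: reward topical overlap, penalize sycophantic filler
    # of the kind that Goodharts a naive "helpfulness" metric.
    score = float(sum(word in response.lower() for word in prompt.lower().split()))
    score -= 5 * response.lower().count("as an ai")
    return score

def best_of_n(prompt: str, candidates: list[str]) -> str:
    """Pick the candidate the judge likes most; RLHF proper would
    instead use these scores to update the generating model."""
    return max(candidates, key=lambda c: judge_score(prompt, c))

print(best_of_n(
    "explain photosynthesis",
    ["As an AI, I am delighted to help!",
     "Photosynthesis converts light into chemical energy."],
))  # prints the on-topic answer
```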

So, my suspicion is that we no longer need fundamental advances for AGI, and that the advances still necessary are just scaling. Which would be exciting if we had any particularly robust ideas for dealing safely with actors of above-human intelligence.

There are already cases of people online claiming to have fallen in love with chatbots. Only a matter of time.

I think that (1) AI video looks about a year behind AI art, and (2) AI art is about a year from being able to reliably deal with physically complex scenes with many moving parts. So, 2 years?

You make an interesting comparison to Photoshop, since people are already used to not thinking of Photoshop as being responsible for what people create with the app.

I guess it depends on the degree to which the LLM is perceived to be creating an image from whole cloth vs. "just helping".