CeePlusPlusCanFightMe

Self-acceptance is bunk. Engineer that shit away.

0 followers   follows 5 users  
joined 2022 September 05 17:01:33 UTC
Verified Email

User ID: 641


I'm not dismissing garbage collection wholesale. I'm dismissing programmers who have known nothing else.

Eh, this basically feels like a box out of the famous XKCD comic.

I did the More Leaders Modmod!

The coding was extremely low-quality and the Python was probably buggy as hell. But it was mine.

EDIT: Wait, I think Ashes of Erebus did end up incorporating some of my work! How's that project going, by the by?

Yeah, it's probably fair that your point deserved more care and elaboration than argumentum ad XKCD can provide. Which: sorry about that! I was overly flip.

So!

Fundamentally, software is a rickety tower of abstractions built on abstractions built on abstractions. At the lowest level you've got logic gates; put enough of those (and some other stuff) together in the right configurations and you can make things like arithmetic logic units; put enough pieces at roughly that abstraction layer together and you have yourself a CPU, and that plus some other bits gets you a computer. Then you have the BIOS, the OS on top of that, the language runtime of whatever you're working in on top of that, and your program running on top of that. Obviously you already know this.

And the reason this basically kinda works is that a long time ago programmers figured out that the way to productivity is to have hardened interfaces at which you program; the point of these interfaces is to avoid having to concern yourself with most of the vast underground of abstractions that form a computer. Which means that most programmers don't really concern themselves with those details, and honestly it's not clear to me they should in the typical case.
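To pick a deliberately boring sketch of what I mean (stdlib Python, nothing exotic):

```python
# Three lines of code against a hardened interface. The filesystem
# driver, the block device, the disk controller, and the firmware
# underneath all stay somebody else's problem-- which is the point.
with open("data.txt", "w") as f:
    f.write("hello\n")
```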

That's because making maintainable software is about ensuring that you are, at all times, programming at the level of abstraction appropriate to your problem domain, going neither higher (typically resulting in perf issues) nor lower (resulting in bugs and long implementation times as you re-invent the wheel over and over). For every guy who tanks the performance of an app by not respecting the garbage collector, there's another who decides to implement his own JSON parser "for efficiency" and hooks it up to the [redacted] API, resulting in several extremely-difficult-to-debug production issues that I personally burned hours fixing, all to shave milliseconds off an hourly batch process's running time. Not that I'm bitter.
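For illustration-- a toy sketch of the JSON half of that story, nothing like the actual code in question:

```python
import json

# The appropriate abstraction level: a battle-tested stdlib parser.
def parse_response(raw: str) -> dict:
    return json.loads(raw)

# The "for efficiency" version. Handles flat, string-valued objects
# and silently breaks on nesting, commas inside strings, escapes...
def parse_response_fast(raw: str) -> dict:
    pairs = raw.strip("{}").split(",")
    return {k.strip(' "'): v.strip(' "')
            for k, v in (p.split(":", 1) for p in pairs)}
```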

So I guess that sort of statement-- "you're only a good programmer if you've used a language with manual memory management"-- feels like unjustified programmer-machismo, where someone chooses one of those abstraction layers between bare physics and the language runtime more-or-less arbitrarily and says "ah, but only if you deeply understand this specific abstraction layer can you truly be a good programmer."

Admittedly I work in distributed systems, where 99% of things that actually matter for performance occur over the network.

I added a bunch of minor leaders, but I didn't do any of the mechanics behind Minor Leaders in general.

I... did not much like the Hamstalfar.

I really don't want to do women dirty like this, but I have yet to come across a "good" female programmer. I really don't know what's at the root of this.

It could just be that there are so few in the first place. The proportion of coworkers I have of any gender that I consider particularly good programmers is quite low, and over a period of ten years I've had roughly... three female programming co-workers?

I don't recall them being remarkably good or bad. Like most of my coworkers I would class their code as "basically serviceable."

Have you known a lot of male coworkers that you viewed as being remarkably good coders?

But the larger understanding about problems as things that need to and can be resolved internally instead of by repetition is especially important in computer programming.

I agree with this. Most of being good at coding rests on your ability to detect hidden abstractions in the business logic you're writing-- subtle regularities in the domain that can be used to write easier-to-understand and easier-to-modify code.

There's this Fred Brooks saying: "Show me your flowcharts and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won't usually need your flowcharts; they'll be obvious." I think that's saying something basically similar, and I think it's true.
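A toy example of the difference (mine, not Brooks'; all rates invented):

```python
# Flowchart style: the domain regularity is buried in branching.
def shipping_cost_v1(region: str, weight_kg: float) -> float:
    if region == "domestic":
        if weight_kg <= 1:
            return 5.00
        return 5.00 + (weight_kg - 1) * 1.50
    if region == "eu":
        if weight_kg <= 1:
            return 9.00
        return 9.00 + (weight_kg - 1) * 2.75
    raise ValueError(region)

# Table style: the regularity (base rate plus per-kg overage) is
# explicit data, and the flowchart becomes obvious.
RATES = {"domestic": (5.00, 1.50), "eu": (9.00, 2.75)}

def shipping_cost_v2(region: str, weight_kg: float) -> float:
    base, per_kg = RATES[region]
    return base + max(0.0, weight_kg - 1) * per_kg
```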

But trying to teach how to do that seems basically similar to trying to teach someone generic problem solving, which professional educators have been banging their heads against forever.

Attempting to ban AI art directly seems obviously doomed due to the impossibility of answering the question "how do you know this piece is AI-generated." Even avoiding fraud accusations is super-easy: I assume that like two minutes after such a ban gets passed you'll see a stock photo site hosted in Argentina where it's like "yes, all of our art is human-generated, but all the humans are anonymous, 1 shiny dollar per download." Then you would use the image in your own U.S. works and be like "yeah, it came from these Argentina guys, take it up with them."

Banning AI art models seems substantially less doomed in concept, but I suspect that would be vigorously opposed by all the well-moneyed AI giants given that this ban would likely make the creation of large-scale multimodal neural networks entirely impossible.

Besides, Disney already has a pretty easy way to deal with copyright violators: sending each one a takedown notice. They already do that today, and it doesn't seem like people are really interested in using Stable Diffusion to make tons of Mickey Mouse media; it's not obvious to me that Disney would want to provoke an expensive and unnecessary legal battle for the sake of marginally reducing the number of takedown notices they have to send.

they're entirely capable of policing whether your Argentinian logo provider actually hired a human artist or used a model

That brings us to the even more interesting question of "what constitutes proof that an artist drew this?"

"Why yes, I absolutely drew these twenty images. Did it in my apartment. Prove I didn't."

One form this "proof" can take is requiring videos to be taken of the art in question in varying stages of completion, but AI can already generate images of artwork in varying stages of completion. Photoshop it into a genuine picture of the artist and you're good to go. Capabilities right now are really impressive.

To be honest I probably don't even need an Argentinian company. Shutterstock could, right now, be unknowingly hosting any amount of AI-generated artwork, in spite of their policies to the contrary. What are they gonna do about it? Start demanding photographs of the stock photographer taking the photograph? Then photos of the validation photographer taking the photo evidence?

Basically agreed that art is not a major sector of the economy. I'm more mulling over the impacts this has on specific actors.

Adobe seems like it should do fine here, yeah. Inpainting and the like seem like they will be inevitable plugins on the core Adobe offerings.

Why would Shutterstock not be impacted in the short term? You think rates of improvement in AI art will slow down, or just that people won't feel motivated to realize cost savings in this way? Or that the potential legal troubles will scare people off?

Rent-seeking might be too strong-- the legal-insurance aspect of their work was legitimately valuable, given the total inability of anyone to validate ownership of any artwork. It's just that we're rapidly moving to a regime where it's not valuable, and I can't find anything in their quarterly reports or press releases indicating awareness of that fact or of any necessity to pivot. I think they are still in the mode of thinking AI art will forever be garbage.

Good catch on Microsoft adding DALL-E to Office. Hadn't heard about that one.

They might, but how would you convincingly show an image to have been AI-generated?

The core problem here is that Shutterstock provides a very specific service for a bunch of money, and AI art represents a means by which competition for that same service will very soon be totally free. Whether or not Shutterstock adopts AI art doesn't really change this core dynamic.

I think you are right that more websites like lexica.art will crop up; it's just that I expect those to be free, ad-supported, and not huge moneymakers.

While I don't disagree with your assessment-- that a lot of these demo images have significant flaws if you look closely-- it seems to me that Imagen is clearly at a place where I would happily use it over stock photos in any context where I might actually want to use stock photos.

Yes, but it has to be the exact same prompt with the exact same random seed. If someone doesn't provide you that info there is no hope of replication.
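Concretely, with Stable Diffusion via the diffusers library, replication looks roughly like this (and even then, pixel-exact reproduction can depend on the model version, scheduler, and hardware):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

prompt = "a watercolor fox in the snow"               # the exact prompt
generator = torch.Generator("cpu").manual_seed(1234)  # the exact seed

# Without both of the above, there is no hope of replication.
image = pipe(prompt, generator=generator).images[0]
image.save("fox.png")
```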

Quality has different dimensions: sure, there's realism, but there is also conformity to subject matter, beauty, and uniqueness. It's not clear to me that realism is the most important of these by a long way.

I was wondering about that-- img2img is a possibility, but it could also just be successive iteration on prompts until you get something close enough to the original. Especially for some of the more generic images.

Only way to know for sure is having the proof contain the prompt and random seed.

Thou shalt not make a machine in the likeness of a Human mind!

Well fuck you, the burden of proof (much like in AML and foreign bribery) is on you to prove that they didn't use AI.

What constitutes proof that you made something and didn't use AI?

Posting another comment because I should have credited you: you make a good point that editorial images are also a big chunk of the stock photo business, particularly for political events. Travel guides are also an excellent use case for which you'll generally want actually-human photographers. (Though even for political events and public figures it's not universally necessary-- see https://newsletters.theatlantic.com/galaxy-brain/62f28a6bbcbd490021af2db4/where-does-alex-jones-go-from-here/ as an early prototype.)

I think a law banning AI-made images would be really, really expensive and complicated to enforce, way more so than money-laundering laws. That's because money is fungible-- one dollar is identical to another in every way that matters-- and only very specific parties are allowed to create new money. These two things simplify the anti-money-laundering project dramatically.

The first way in which this simplifies things is that anti-money-laundering systems need to work with a very finite number of companies; these companies track detailed identity information as mandated by Know Your Customer laws, which enables the government to trace chains of transactions backward.

By implication, if you wanted to do "money-laundering laws, but for images," then every single stock image company-- and every other company that sells the rights to images-- would need to implement Know Your Customer procedures. But it's actually even harder than that, because (since images are different from one another) you need a detailed audit trail for every image somebody uses, in a way you don't for every individual dollar-- one that would let anybody verify that they actually own the rights to that specific image (and that those rights were originally sold by a real person).

That means Shutterstock would need to maintain detailed identity information on every artist uploading images, as well as contact information that can never go out of date (or else they lose their ability to confirm that any given image was actually drawn by that artist). If any contact information does go out of date-- or if they have an outage resulting in data loss-- then instantly you have the security vulnerability of "oh, sure, John Johnson drew that picture, oh whoops, I guess Shutterstock lost the info on that picture, lol, guess you can't verify it." And sure, you can always say "sorry bro, burden of proof's on you," but this would mean that if John Johnson dies, or Shutterstock has data loss, or Shutterstock goes bankrupt (thereby losing the ability to validate image rights), everyone who ever purchased stock imagery from Shutterstock is suddenly in breach of the anti-image-laundering laws. Which would be... interesting.
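To sketch what that per-image audit trail would actually have to contain (entirely hypothetical field names, mine):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ImageProvenanceRecord:
    """What an anti-image-laundering regime would force every
    rights-seller to maintain, forever, for every single image."""
    image_sha256: str           # identifies this exact image
    artist_legal_name: str      # KYC-style identity of the creator
    artist_contact: str         # must never go stale-- the hard part
    creation_attestation: str   # e.g. signed statement or WIP evidence
    transfers: list[tuple[str, str, datetime]]  # (seller, buyer, when)
```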

The second way money laundering is a simpler problem is that only very specific parties are allowed to create new money. This means that if some new money appears out of nowhere, somebody has definitely committed a crime, and it's (relatively) simple to figure out who-- just trace the chain of transactions backward. If new pictures come out of nowhere, that's not really a signal of anything except that artists exist, and I guess the person furthest back in the chain is the artist.

The problems needing to be solved here are actually quite similar to the problems involved in validating copyrights to a given image, which is also an unsolved problem (which is why Shutterstock has to offer legal indemnities when you purchase usage rights for an image).

Holy shit, I think you could be right. This is exactly the kind of use case NFTs were made for-- ones where you need a foolproof immutable chain of transactions that can never go down.

I did not expect this thread to be the first time I hear of a use case for which NFTs appear to be the best solution.

(I would actually greatly prefer that this not be a thing because I think it would be a huge expansion of the surveillance state for what feels like a deeply silly reason, but I'm tickled regardless by someone bringing up blockchain technology in order to solve a real-world use case for which it legitimately appears to be the best solution. Absolutely wild.)

Honestly, same. I hear about all these instances where Shutterstock/Getty Images sue random uninformed people on the internet for shitloads of money whenever they sense a violation of one of their stock image copyrights, and I think to myself, you know, maybe this business model should be burned to the ground. And the earth salted so that no such business model can ever grow again.

So something I don't really get is this:

https://www.theverge.com/2022/10/12/23400270/ai-generated-art-dall-e-microsoft-designer-app-office-365-suite

As far as I can tell the AI art generation thing has been pretty exclusively led by tiny startups; this is because unrestricted text-to-image for the masses is the mother of all adversarial environments where your AI will, regardless of the safeguards you put around it, inevitably be shown to have drawn or said something embarrassing, and if you're a tiny startup you have the luxury of not giving a shit. Not so for the big players, which is presumably why Google's never released any of their fancy text-to-image or text-to-video tech demos.

(One exception: DALL-E 2 was released by OpenAI, but they only did that after Stable Diffusion and Midjourney threatened to make it irrelevant-- that was basically a forced move.)

So. How does this not explode almost immediately in Microsoft's collective face? And why would Microsoft be leading the generative-art charge instead of Google, given Google's massive lead here?

You make an interesting comparison to Photoshop, since people are already used to not thinking of Photoshop as being responsible for what people create with the app.

I guess it depends on the degree to which the model is perceived to be creating an image from whole cloth vs. "just helping."