
Wellness Wednesday for February 14, 2024

The Wednesday Wellness threads are meant to encourage users to ask for and provide advice and motivation to improve their lives. It isn't intended as a 'containment thread', and any content that could go here could instead be posted in its own thread. You could post:

  • Requests for advice and/or encouragement, on basically any topic and for any scale of problem.

  • Updates to let us know how you are doing. This provides valuable feedback on past advice / encouragement and will hopefully make people feel a little more motivated to follow through. If you want to be reminded to post your update, see the post titled 'update reminders', below.

  • Advice. This can be in response to a request for advice or just something that you think could be generally useful for many people here.

  • Encouragement. Probably best directed at specific users, but if you feel like just encouraging people in general, I don't think anyone is going to object. I don't think I really need to say this, but just to be clear: encouragement should have a generally positive tone and not shame people (if you feel that shame might be an effective motivational tool, please discuss it so we can form a group consensus on how to use it rather than just trying it).


AI Video Gen just leapt forwards

Ay yo just look at this shit. Shame it's an OAI product and thus utterly neutered by the time it makes it to us plebs.

Increasingly clear that (at least in the first world) the future of human civilization (barring collapse / Elysium / Manna scenarios) is living the lifestyle currently enjoyed by aimless funemployed rich kids. As someone who knows many, it's not a bad life, but it is often an unfulfilling one.

I suppose CGI animators are next in line for the chopping block; we're only a year or two from full Pixar movies being created with (AI-assisted) prompt engineering by one or two people.

we're only a year or two from full Pixar movies being created with (AI-assisted) prompt engineering by one or two people.

Gosh, I hope so. But I must admit I'm skeptical. Stable Diffusion was released to the public in August 2022, about 1.5 years ago, and though the progress in those roughly 18 months has been amazing, it's still nowhere near the level of being able to create an image equivalent of a Pixar movie (I dunno, maybe a 30-page comic book with coherent characters and a plot that loosely follows the 3-act structure?) just with prompt engineering. I do think we'll see even faster progression in the next 2 years, but going from even the impressive stuff we have now to a full, coherent 90-minute film seems sufficiently difficult that it would still need a lot of actual industry experts making edits and putting them together.

My prediction for 2 years out is that an amateur studio with 1/10 to 1/100 (but probably not a smaller fraction?) of the manpower and resources that Pixar has today could make the equivalent of a Pixar film. Highly speculative, of course. But I really do hope you're right, and we enter a world where a couple of professionals could just use prompts to generate Pixar-equivalent films. Ideally this would imply that amateur individuals with little expertise could generate 10-20-minute videos with professional, if not necessarily Pixar-level, production values (though in the realm of AI, the way "production values" manifest will be different, since AI is really good at some things that CGI has trouble with, and vice versa).

In addition to us developing new techniques to prepare for deployment, we’re leveraging the existing safety methods that we built for our products that use DALL·E 3, which are applicable to Sora as well.

Yep, that's DOA. DALL-E's built-in filter is infamously hair-trigger even for non-risqué things, and the model itself has a semi-poisoned dataset for certain things like anime art styles. I predict Sora's capacity to generate people will be even worse than that of current models; there's a reason they mostly showcase heckin' cute puppers and shit.

On a related note, it's getting very tiresome how my excitement for new advances in AI tech ("holy shit this is insanely cool wtffffff") is near-immediately soured by the reality of its applications ("I can scarcely begin to fathom how cucked the pleb-facing version will be"). This is more or less a me problem, but I can't be alone in thinking this. It's not even so much that I personally feel cucked by not being able to gen e.g. cute girls doing cute things; it's more like: here is this insanely creative technology, it's pretty cool right, let us proceed to do absolutely fucking nothing with it because letting plebs have fun is too problematic in the current year, your superiors know what's better for you, no fun allowed, get back to your wage cage you fucking rube. We live in a society, etc.

I know I sound like a curmudgeon saying nothing constructive (technically they can do whatever they want with what they themselves developed), but I am drunk, sorry, incredibly tired of this safetyism mindset, even after getting thoroughly desensitized to non-kosher uses of generative AI after a year in the company of /g/entlemen (whose existence technically proves it's not as bad as I paint it, but still).

On a lighter note, experts say.

I'm not thinking as much about inevitable filters as I'm thinking of how there's gonna be maybe 2-3 months of fun you get out of it and then social media just gets flooded with samey-samey videos with the "smell of AI" on them, like with images.

On that note, I'm honestly impressed and partly relieved by how quickly people develop a "sense" for AI-generated things - image, text, and soon likely video. It also reinforces my belief that whatever the eventual AGI/ASI may be, it will not be a master persuader with infinite charisma like some people seem to believe, we'll already be reasonably hardened by years of psyops before it comes into play.

What worries me is the possibility that people will sour on 'pretty' AI-generated art/text/video the way they soured on realism in the 20th century. The 'real art is shocking' brigade are too powerful already.

It looks close to as good as it can get. To improve beyond this, a model will need more than just information from videos, and more instruction than a text prompt, to increase in usefulness. I can't believe temporal consistency was solved this soon; I thought it would take another year minimum. But I guess that's because other companies' video generation is just so bad. Google's whiff on Gemini, and now this, really cements that OpenAI is easily more than a year ahead of the competition.

Dunno how to feel about this. On the one hand, I might be able to tell some stories I want to tell with short movies.

On the other hand, it really seems to be over for artcels. :/

...this is a link to a three-year-old boston dynamics robot video. Is that intentional?