This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
AGI Was Never Going To Kill Us Because Suicide Happens At The End of Doomscrolling
I'll go ahead and call this the peak of AI version one-dot-oh.
The headline reads "OpenAI Is Preparing to Launch a Social App for AI-Generated Videos." People will, I guess, be able to share AI-generated videos with their friends (and who doesn't have THE ALGO as a friend). Awesome. This is also on the heels of the introduction of live ads within OpenAI's ChatGPT.
Some of us were waiting for The Matrix. I know I've always wanted to learn Kung Fu. Others of us were sharpening our pointing sticks so that when the paperclip machine came, we'd be ready. Most of us just want to look forward to spending a quiet evening with AI Waifu before we initiate her kink.exe module.
But we'll never get there. Because Silicon Valley just can't help itself. Hockey sticks and rocketships. Series E-F-G. If I can just get 5 million more Americans addicted to my app, I can buy a new yacht made completely out of bitcoin.
I am a daily "AI" user and I still have very high hopes. My current operating theory is that a combination of whatever the MCP protocol eventually settles into, plus agents trading some sort of crypto or stablecoin, will create a kind of autonomous, goal-seeking economy. It will be sandboxed but with (semi) real money. I don't think we, humans, will use it to actually drive the global economy, but as a kind of just-over-the-horizon global prediction market. Think of it as a way for us to have seen 2008 coming in 2006. I was also looking forward to a team of maybe 10 people building a legit billion-dollar company, and that paving the way for groups of 3-5 friends running thousands of $10-$50 million companies. No more corporate grind if you're willing to take a little risk and team up with some people you work well with. No bullshit VC games - just ship the damn thing.
And I think these things are still possible, but I also now think the pure consumer backlash to this Silicon Valley lobotomy of AI could be very much Dot-Com-2-point-O. The normies at my watering hole are making jokes about AI slop. Instead of "lol I doomscrolled until 3 am again," people are swapping stories about popping in old DVDs so that they can escape the ads and the subscription fatigue.
Culturally, this could be great. Maybe the damn kids will go outside and touch some grass. In terms of advancing the frontier of human-digital knowledge, it seems like we're going to trade it in early, not even for unlimited weird porn, but for pink-haired anime cat videos that my aunt likes.
This is the worst that AI video gen is ever going to be.
Which is good, because that means that there's every chance that quality will improve until it isn't slop anymore. I look forward to actually decent and watchable AI movies, TV shows and animation. We'll be able to prompt our way to making whatever our hearts desire. Even if the cost doesn't become entirely trivial for a full-length project, as long as it's brought down to mere thousands or tens of thousands of dollars, a ton of talented auteurs will be able to bring their visions to life. Will Sturgeon's law still hold? Probably, but we'll go from 99.9% unwatchable slop to a happy 90% soon enough.
And it's bad, because this is the least viral and compulsively watchable AI generated media will ever be, including shortform "reels". I'm not overly worried, I have some semblance of taste, but eventually the normies will get hooked. And I mean the average person, not people with dementia. If it gets me, it'll have to work for it, and if I'm being presented with content more interesting and high quality than typical mainstream media, I don't really think that's a bad thing. I already have little interest in most human visual output.
I think you missed the second half of my original post. I'm angry because we're using what absolutely is epoch changing technology to re-run the product format of yesterday. I am livid that Sam Altman's lizard brain went "Guys! YouTube plus TikTok Two-Point-Oh!" and all of the sycophants in the room started barking and clapping like seals in approval.
Because even one of the evil empires, Google, is trying to use AI to make textbooks more accessible. And this is to say nothing of AI's ability to fundamentally extinguish do-nothing fake email jobs and turn motivated individuals into motivated individuals who can work for themselves on things they actually want to work on, instead of trading 40 hours at PeneTrode Inc. to pay their bills.
"I'm not worried about bad CONSOOOM, because the future has so much better CONSOOOOM!" This is no future at all. @2rafa nailed it with the Wall-E references and @IGI-111 brought up a worthy pondering point from the Matrix. Purpose in life is not the pursuit - or even attainment! - of pure pleasure. Especially when that pure pleasure is actually a pretty paltry fake world.
This is a pretty common perspective, and one that I just can't fully grok. Pleasure is pretty great, but pleasure is evidently not the only thing people go for. People will pursue all sorts of things when it comes to the capabilities of AI-generated media, and that will include pleasure, but that will also include things like meaning, depth, insight, or whatever other fancy-sounding term people like to use when they describe the value they get out of things they consider high art (I chose those terms because they apply to me with respect to works of fiction I consider great in some "high art" sense, such as The Shawshank Redemption or Crime and Punishment).
And the great potential I see for gen-AI is its ability to create these things without needing someone intelligent and eloquent and talented enough to think it through and put it together. A film of the quality of The Shawshank Redemption was only possible due to the hard work of many extremely talented individuals working together to express something meaningful.
And yet, the film is just a sequence of grids of pixels flashing 24 times a second in sync with audio, and there's no rule of the universe saying that AI couldn't have generated those pixels and those sound waves (more generically, the precise sequence of 1s and 0s contained in an 8K transfer or whatever onto digital media), and the film would be no less inspirational, no less insightful, no less meaningful if it had been created that way. Likewise if it had turned out that Dostoevsky was an avid juggler who wrote Crime and Punishment by labeling balls with letters, then adding a letter every time he dropped a ball during his practice sessions, this wouldn't change the meaning contained within the novel at all.
And I see no reason to believe that gen-AI won't be able to order pixels or letters in a way to create new works of fiction that also provide insights and meaning of similar depth, around other topics, merely by training on what sequences of letters or pixels cause people to respond with, "Wow, that's really meaningful and deep!" versus "Wow, that's such vapid slop!" and everything in between and around. Because I don't think there's anything magical happening in the mind when someone thinks of something, notices that their mind judges that thing as "meaningful, deep, inspiring, etc." and then writes it down with intent to convey that sense to others.
And so instead of meaningful, deep, [insert other positive word here] works of art being limited by how few talented/skilled artists there are and how little time they have to produce art due to needing all that sleep and food, they'd be limited by how fast and common AI software and hardware are. These limits seem to be far looser than human ones, and so I see great hope for a future world where novel works of art that provide real, true, deep meaning will be as commonly encountered as a toilet or a microwave oven is today. There are potential downsides to being overexposed to too many works of art with too much meaning and depth and insight into the human condition, much like how the downsides of social media and the negative effects of overexposure to other people's approval and disapproval were both underestimated. But that doesn't seem like an awful problem to have.
This is a great comment and I thank you for it.
Let's be specific about three things, however: 1. LLMs/AI as a broad field. 2. Specific models. 3. The commercial marketing of those models.
LLMs/AI -- Go for it. As something close to a free speech absolutist, I want progress in all directions on this front at this level.
Specific models. Go for it, again. I don't believe there is such a thing as an inherently "evil" model besides some embarrassingly obvious ones (i.e. one trained on pictures of cheese pizza - that's an internet euphemism for the most very bad thing, btw). I have no inherent issue with even "produce marketing slop only!" models. This is the level I think your comment operates at -- yes, generative AI that could make a Shawshank-level film would be excellent!
The commercial marketing. This is the level at which I am raging. Not because I don't want to see more AI slop. I can already avoid that: I just turn off my computer monitor and phone. I rage because you have OpenAI, which has tens of billions of dollars to burn, sprinting towards the lowest-common-denominator use for gen-AI, made even worse by the fact that it's attempting to replicate the attention-capture model of social media. They could be putting infinite Dostoevsky in your pocket, but they are actively choosing not to. That's the contemptible feature for me. As my previous comment stated, even Google is going "hey, maybe let's try to make dense textbooks more accessible?" You can draw a straight-line path from that to "I want to read Dostoevsky, but I find it hard; hey RussianNovelistGPT, can you explain Raskolnikov to me?"
But, again, the median appetite seems to be a re-hash of attention-economy capture processes. I am more optimistic about Anthropic because they seem to be doubling down on using Claude to build agents and to make coding open to people who don't code. But I also worry that will turn into a bunch of MBA types rebuilding their own shitty versions of Salesforce and pitching them to their boss as a "one-man AI project to synergize all of the KPIs!"
This is some perfect-world thinking, but I want to see the $100 bn of AI spend go to a company that's trying to develop new materials to help humanity economically escape the gravity well (and, no, that is not Elon and xAI). Or some AI company that actually has a non-vaporware approach to analyzing the big diseases that are responsible for the most suffering and death on earth. I'll stop here before I actually veer into "why can't all the good things be!" territory. My point remains: we're selling out early on AI because the charlatans by the Bay captured a bunch of money and are re-plowing it into their business models from the 2000s and 2010s. We could be sprinting towards so much more.
When I was a young child, I cried every single morning for years because I didn’t want to go to school. Often my parents had to physically carry me out of the house before I begrudgingly accepted I was going, and I would cry the entire way.
But I loved school. Every day I had a great time and I’d be sad to come home and I’d tell my parents about who I spoke to and played with and how much fun I had. Much more than if I’d have stayed at home.
Adulthood is often similar. I was depressed for a year and stopped working because I was so sad and my life felt empty and meaningless. I got very lucky that an old coworker offered me a new job and everyone in my life essentially forced me to accept, and when I started I suddenly found things cleared up. I liked talking to people every day, I enjoyed working toward a goal, the sense of achievement after a long week, meeting new people, small talk about nothing in particular.
But if I hadn’t gotten lucky or had my arm twisted into accepting that lucky break, I fully know I could have spent another five years doing nothing on my couch, watching YouTube video essays and every Real Housewives franchise and reading and playing video games.
Not everyone knows what will make them happy. Even fewer can force themselves to do what will. Traditional institutions like early marriage and the expectation that couples produce children exist in part because sometimes it’s only with the passage of time that we realize the happiness and fulfillment these things bring us.
Let 10 year olds eat as much candy as they want, stay up all night to play video games and skip school and they will, no matter how much their future selves might regret it. Adults aren’t so different. If you give people basic income and infinite free amazing quality entertainment then certain consequences are inevitable, and if you care about the wellbeing of your fellow man (and I do) then that is suboptimal even if the machines can look after us.
Don’t rich people already have essentially infinite income? They do spend a lot of time frolicking on yachts and treating themselves to various extravagant delights, but for all that, their lives seem fuller than those forced to accept drudgery.
You discuss school and jobs, but I don't think any of that applies to entertainment media. Yes, it's usually good that we force children to go to school. It might even be good if we were to force adults to go to work, even ones who are independently wealthy or happy enough to subsist on welfare. But entertainment media? We currently have no way of forcing adults to watch certain pieces of media that we think would be good for them. Adults have pretty free choice - today more than ever - to seek out entertainment media as they wish, and though "high art" stuff is very, very niche, it's still a significant niche.
This indicates that people actually seek this stuff out voluntarily. Where I see gen-AI being a boon for this is that we can have far higher throughput of art that is considered "good" by whatever "high art" standards are held by people with taste and discernment and [whatever characteristic that true connoisseurs have], and also for far more custom artworks that provide exactly the right amount of challenge to enrich someone's life without being so challenging as to make them shut down and reject it.
And building on that, there's also the fact that it's quite possible to train AI on media that makes people go, "I expected that to be really bad, but it barely piqued my interest enough to check it out, and I'm glad I did," versus ones that make people go, "I expected that to be really bad, and there was nothing about it that piqued my interest, so I decided not to check it out," versus ones that make people go, "I expected that to be really bad, but it barely piqued my interest enough to check it out, and I regret doing so," as well as many other combinations of similar concepts. And I don't see why some near-future gen-AI couldn't generate media that creates reactions similar to the first one while avoiding the latter ones fairly consistently.