This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

AI artistic successes are indicative of survivorship bias. The way their creators operate is by spamming vast quantities of works and seeing what sticks. Through quirks of fate, a few of them end up successful. This business model is probably short-lived, though, as the very spam it relies on degrades the platforms necessary for their proliferation, so that user interest will eventually decline. Already we're seeing sites like Deviant Art and Literotica killed off by AI spam. AI will kill off markets rather than improve them.
Human artistic successes are indicative of survivorship bias. AI just makes this more visible because the productivity is so much higher.
It also amplifies the effect through that higher productivity. That is, you can achieve greater success with a lower mean quality, because instead of having a thousand humans write a thousand works and then picking the best one, you can generate ten million AI works and then pick the best one, allowing you to select more standard deviations up. Which means that there will be literal millions of AI slop works of very low average quality, published just in the hope that one will rise to the top.
This makes discovery a lot harder and wastes more of early readers' time on slop before they find the good stuff.
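A toy simulation makes the "more standard deviations up" point concrete. The means, sample counts, and unit variance here are invented purely for illustration, not measurements of anything:

```python
import math
import random

random.seed(0)

# Toy model: every work's "quality" is one draw from a normal distribution.
# The human pool draws from a higher mean; the AI pipeline compensates with volume.
def best_of(n_draws, mean, sd=1.0):
    return max(random.gauss(mean, sd) for _ in range(n_draws))

human_best = best_of(1_000, 0.0)       # a thousand human works, mean quality 0
ai_best = best_of(10_000_000, -1.5)    # ten million AI works, mean 1.5 sd lower

# The expected max of n standard-normal draws grows roughly like sqrt(2 ln n):
# about 3.7 sd of selection for 1,000 draws vs about 5.7 sd for 10,000,000.
print(f"best human work: {human_best:.2f}, best AI work: {ai_best:.2f}")
```

Even with a mean quality 1.5 standard deviations lower, the ten-million-work pile's best entry typically lands near the thousand-work pile's best, which is the selection effect the comment describes.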
I’m not a ‘math wizard’, but something about this seems off. Shakespeare didn’t write one hundred plays and then choose the best few dozen to publish. He developed his playwriting ability until he was at a skill level to consistently put out good content. If AI lacks the underlying skills to create good works, then should we expect even one in a trillion of its works to be as good as Macbeth, or should we regard the creation of such a thing as physically impossible unless underlying conditions are met? It seems like it’s less a matter of statistical probability than physical possibility.
Survivorship and selection bias work at the population level as well as at the level of individual works. How many hundreds or thousands of playwrights existed in Shakespeare's time? And yet most are forgotten, while the best of the best (Shakespeare's works) are what are remembered and enjoyed and studied.
Also, there definitely is variation within an individual author's works. How much time and effort do people spend studying "Two Gentlemen of Verona"? Is it actually a good work? Personally I haven't read it, but given how little it's talked about or ranked on people's lists, my guess is that it's mid and the only reason anyone talks about it at all is because Shakespeare is famous for his other plays. That is, Shakespeare wrote 38 plays, and while his skill was well above average, so that his average work is better than the average play, they're not all Hamlet. But one of them was. He didn't write a hundred plays and then only publish the best; he wrote 38, published them all, and got famous for the best few (which in turn drove interest in the rest above what they deserve on their own merits).
In-so-far as AI is likely to vary less in "author" talent since whatever the most cutting edge models are will be widely copied, we should expect less variance in the quality of individual works. But there will still be plenty of variation, especially as people get better at finding the right prompts and fine-tuning to create different deliberate artistic styles (and drop that stupid em-dash reliance).
I tentatively agree that there are limits to this. If you took AI from 5 years ago, there is no way it would ever produce anything publishably good. If you take AI from today, I don't think it could ever reach the upper tier of literature like Shakespeare or Terry Pratchett. However, this statistical shotgun approach still allows it to reach above its station. The top 1% of AI work today might be able to reach Twilight levels, and if each of those has a 1 in a million chance of going viral and being the next Twilight, then you only need to publish a hundred million of them and hope you get lucky. Clearly we've observed that you don't need to be Shakespeare in order to get rich; it's as much about catching the public interest and catering to (and being noticed by) the right audience as it is about objective quality, and that's much more a numbers game.
I do think that AI lacks the proper level of coherence and long-term vision to properly appeal to a targeted audience the way something like Twilight or Harry Potter does. But a human curator (or possibly additional specialized AI storyboard support) could probably pick up the slack there (although at that point it's not quite the shotgun approach, more of a compromise between AI slopping and human authorship, which mixes the costs and benefits of both).
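The "numbers game" arithmetic above is easy to check directly. The 1-in-a-million viral chance is the comment's own assumed figure, and independence of each work's success is a simplifying assumption:

```python
import math

# Assumed per-work chance of going viral, taken from the comment above.
p = 1e-6

for n in (1_000, 1_000_000, 100_000_000):
    # Chance of at least one hit among n independent works;
    # for small p this is approximately 1 - exp(-n * p).
    p_hit = 1 - (1 - p) ** n
    print(f"{n:>11,} works -> P(at least one viral hit) = {p_hit:.3f}")
```

At a thousand works the odds of a hit are negligible, at a million they're roughly 63%, and at a hundred million a hit is a near certainty, which is exactly why volume substitutes for quality under these assumptions.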
Shakespeare and his contemporaries had to pay an upfront cost that was significantly higher. They had to write (already a rare skill) a manuscript and had to convince at least one theatre manager to read their work. This means their innate skill had to be high enough that their first (or second, or third, if they were persistent) play was already of sufficient quality.
A modern ShAIkespeare can produce and publish a new play every weekend. We need Lord StrAInge's Men, a troupe of AIs that can read, review and dismiss AI slop just as quickly as it's written instead of relying on avid human readers.
An AI that can accurately identify and dismiss slop is 90% of the way towards producing quality content, since you could just build the generative AI with that skill built in (and train them on it).
Which is to say, maybe in 10 years this will be a mostly non-issue. If they reach the point where they can generate thousands of genuinely high quality and entertaining stories, I'll happily consume the content. I think "human authorship" as a background principle is overrated. It has some value, but that value is overrated in comparison to the actual inherent value of the work. The problem with slop is that it's not very good, regardless of whether it's generated by humans or AI. Once it's good then we're good.
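One way to picture the critic-as-filter idea from the comment above is best-of-n selection: generate many candidates, keep the one the critic rates highest. `generate_story` and `critic_score` here are toy placeholders, not real models:

```python
import random

random.seed(1)

# Hypothetical stand-ins: a real pipeline would pair a generative model with
# a learned quality critic; here both are toy functions for illustration.
def generate_story():
    return random.random()              # placeholder for a generated work

def critic_score(story):
    return story                        # placeholder quality judgment

def best_of_n(n):
    # Generate n candidates and keep only the one the critic likes best.
    # The same critic signal could instead be folded back into training,
    # which is the "build that skill in" point from the comment above.
    return max((generate_story() for _ in range(n)), key=critic_score)

print(best_of_n(1000))
```

The design point is that the filter and the generator are separable: a reliable slop-detector can raise output quality as a post-hoc filter long before anyone figures out how to train it into the generator directly.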
Not if the process itself is beyond the AI to recreate.
For instance, say that a great movie like A Clockwork Orange was made in part through the theoretical understandings the main actors had developed over their lifetimes for their crafts and used to feed into their decisions of how to act and portray their characters.
Coming up with a similar quality of acting might be impossible through mere observation and mimicry of what works and what doesn't. The AI has an intuition for what sorts of things generally go together, but it doesn't use, among other things, underlying theoretical know-how to construct its outputs.
My current assessment is that there's a low ceiling for how far AI 'thinking' can take the quality of its output, particularly regarding the complexity of what it's attempting to do. Projects that require a layered approach of various theories and techniques seem like they're fundamentally beyond AI. The more systems that need to come together to create a work, the more difficult it becomes, exponentially, for a pattern-finder to match its quality. The pattern-finder needs to become capable of wielding tools, systems, and theories in its thinking in order to up its game past a certain point.
I've heard people say before, in the context of AI art, that humans are essentially just 'pattern finders', too, and so are creatively indistinguishable from AI. But I think this is wrong: it ignores external tools humans use to structure and create their work, such as theories and techniques, which cumulatively take the load off of them having to conceive everything in a fit of genius. I think this is the primary reason AI, despite its 'brilliance' as a search engine or generalist explainer, is so lacking in certain other regards. It's due to the total reliance of its 'cognition' on what, compared to humans, would be more like a single sub-process.
Why would you think this? Every year it gets better at this sort of thing. Clearly, it is beyond the level of current AI, but I don't see how you make the leap to "fundamentally beyond" when this seems like exactly the sort of thing that you could do by explicitly layering various theories and techniques together. Maybe you have 20 different sub-AIs, each of which is an expert in one theory or technique, and then you amalgamate them into one mega-AI that can use all of those techniques (with some central core that synthesizes all of the ideas together). I don't know that that's definitely possible, but I can't see any evidence that it's "fundamentally" beyond AI just because they can't do it now. A couple of years ago AI couldn't figure out prepositions like putting a cat on top of a horse vs putting a tattoo of a cat on a horse, and people said that was "fundamentally beyond AI" because it had never encountered the real world and didn't understand how things interact. But now it usually can. Because it got better.