This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Notes: This was a response to @cjet79, but I decided to make it a top-level comment, because it's my second favorite subject after sci-fi bullshit: literary snobbery with a side of AI.
First, I like AI. I mean, I like it as a tool. (And yes, I know "AI" is still a misnomer; I understand that LLMs are just token predictors, and I think people who believe any neural net is close to actually "thinking" or becoming self-aware, or who ask "really, what are we but pattern-matching echolaliac organisms?", are drinking Kool-Aid.) I've used ChatGPT to build applications. I don't do "vibe coding," but I have found it increases my productivity, because with the right prompts it helps me pick up new applications and libraries faster than I could by going through tutorials and manuals. It cannot build a fully functional application (beyond the simplest) by itself, though. It often goes back and forth, recommending obsolete or unavailable libraries, or suggesting I move a line to the wrong place and then recommending I move it back in the next iteration. It's smart and often makes very good recommendations for improving and optimizing code, and it spots subtle bugs and typos very easily. It's also stupid and often makes terrible recommendations that will break your application.
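For the curious, here's roughly what that "second pair of eyes" workflow looks like in practice, a minimal sketch using the OpenAI Python SDK (the model name and prompt wording are just placeholders, not a recommendation):

```python
# Minimal sketch of using an LLM as a code reviewer, per the workflow above.
# The model name and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

snippet = """
def average(xs):
    return sum(xs) / len(xs)   # crashes on an empty list
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any capable chat model works here
    messages=[
        {"role": "system",
         "content": "You are a code reviewer. Point out bugs, edge cases, "
                    "and suggest minimal fixes. Be concise."},
        {"role": "user", "content": f"Review this Python function:\n{snippet}"},
    ],
)

print(response.choices[0].message.content)
```

The human still decides which suggestions to take; as noted above, some of them will be terrible.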
On the hobby side, I've been making AI art, running Stable Diffusion on my PC and even training some LoRAs. The vast majority of AI art is, as they say, "slop," and very recognizably AI, but that's mostly because the vast majority of AI art is "type a few sentences into a text box, copy the picture that results." "A cat making a face." "A cute catgirl with an assault rifle giving a come-hither look to her incel AGP fetishist fans." You will get a picture that meets your requirements, but it will be very obviously plastic AI digital slop, like a Walmart t-shirt or a Subway sandwich. If you take the time to learn about inpainting and outpainting and ControlNet and upscaling and advanced prompt engineering and model selection and what all the parameters actually tweak, you'll get good pictures, pictures good enough to win Scott's various AI challenges.
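If you're wondering what "learning the parameters" even means, here's a rough sketch with Hugging Face's diffusers library; the base model ID and LoRA path are placeholders for whatever you actually run locally:

```python
# A rough sketch of going beyond "type a sentence, copy the picture":
# explicit model choice, a LoRA, a negative prompt, and the knobs
# (steps, guidance scale, seed) mentioned above. Model ID and LoRA path
# are placeholders; adjust for your own setup.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # placeholder base model
    torch_dtype=torch.float16,
).to("cuda")

pipe.load_lora_weights("./loras/my_style.safetensors")  # placeholder LoRA

generator = torch.Generator("cuda").manual_seed(1234)   # reproducible seed

image = pipe(
    prompt="a cat making a face, watercolor, soft lighting",
    negative_prompt="blurry, extra fingers, watermark, lowres",
    num_inference_steps=30,
    guidance_scale=7.0,
    generator=generator,
).images[0]

image.save("cat.png")
```

Even this is only the starting point; inpainting, ControlNet, and upscaling are separate pipelines layered on top.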
Are they good enough for an AI to become a renowned professional artist with a unique and recognizable style? Not yet. But artists insisting AI isn't good enough to replace the vast majority of commercial artists, the ones who just need to draw hamburgers or cars or Corporate Memphis HR posters or commissioned MCU fanservice, are rage-coping hard. The sticking point now is no longer extra fingers or shadows going in the wrong direction (though most AIs will still make little mistakes that are tells for the observant, and these can be easily repaired), but the fact that it's still painful to go back and forth to get exactly the pose, position, expression, color shade, background, accessories, and species of flower that you want. With a real artist you can talk to the artist, and the artist can do rough sketches and ask clarifying questions. With AIs, you generate 100 images, let the GPU go brrrrr, and maybe you get one or two that are kinda close and still need extensive inpainting and photoshopping. Conversely, though, I have commissioned some artists in the past, and while I was generally satisfied with the results, even a human will never quite capture the picture that's in your head. Enough time with Stable Diffusion and some Photoshop ability will often come closer to the mark. AI art is getting better all the time, but IMO it is not close to replacing truly talented high-end artists, just as AI is not close to replacing actual rock star programmers and innovators.
It is close to replacing the print shoppers, the commercial graphic arts majors, the SEO optimizers and storefront webapp builders, though.
So, can it write?
Yes and no. I've tried out some of the NovelAI apps and gazed upon the sorry state of Kindle Unlimited, already flooded with thousands of subpar self-published romantasy-written-while-fingering-herself slop and power-fantasy-written-while-jerking-himself slop, and now that has been multiplied seven and sevenfold by AIs churning out the results of all those Udemy and YouTube courses promising you can now make a living on Amazon without actually writing anything. Throw a million books out there with pretty covers and even if you make pennies per title, it adds up. AI has been devastating the short story market for a while now.
If we get to the point where AI can generate good stories, then... I guess I'd be happy to read AI-generated stories? I think we are a long, long way from there, though. And I have experimented. LLMs can generate coherent stories at this point. They have a plot, and some degree of consistency, and I suppose they have all the traditional elements of a story. I am not sure if they are up to generating an entire novel with one prompt yet - I haven't tried, but I know there are tools to let you coach it along to get a whole novel out of it.
But everything I have seen so far is crap. In fairness, most of what's on RoyalRoad (and Wattpad and AO3 and Scribd and all the other open platforms) is crap, but you can still tell what's human-written crap and what's AI slop.
I may be in the minority here; it often seems readers just don't care much anymore and want to consoom entertainment units. But waving my snooty literary tastes here, I sometimes despair at the writing some people think is good just because it tickles their ~~fetishes~~ sweet spots. Some genres (progression fantasies, litrpg, a lot of romance) are written so, so badly that if they aren't AI generated, they may as well be. An AI has no prose style except very poor mimicry of other styles; it has no ability to truly craft words and turn a phrase in a way that makes you say "Ah, yes, that is totally how that author writes." It has no way to embed themes and metaphors that echo throughout a book, and it has no thematic consistency (often not even tonal consistency). Character arcs, such as they exist, are flat and linear; LLMs cannot grasp "character development" or complexity or nuance in any real way.

If you want a book that's mental bubblegum, a linear power fantasy about a guy getting ever more powerful and punching bigger villains in the face, or a hot chick being fought over by two smoking alphas, with nothing more to it and not even any clever writing to sweeten the experience, just "thing happens and then thing happens and then thing happens" and maybe some purple descriptive modifiers mimicking a high school creative writing exercise, I suppose AIs can do that now. But nothing that even approaches the most derivative pastiches of true classic novels.
And that's just to produce one book. How about a series, a multibook arc preserving plot threads and MacGuffins and character development from one book to the next? An AI cannot do that, and I doubt one will be able to any time soon.
If you're not really a reader and consuming stories is like popping open a beer and you don't care how it tastes as long as it gives you a buzz, maybe AIs will fill that entertainment need. I sometimes put AI-generated soundtracks on as background music, and while the first few minutes can be okay, after a while it sounds very samey and droney and repetitive, even to my extremely unsophisticated ear (and my musical tastes are, in contrast to my literary tastes, utterly banal and horrible).
I don't doubt AI will continue to improve and eventually we'll have the first award-winning novel completely written by AI that even experts agree is actually... kinda good. But I am skeptical. I think it will take a while. I think even when we get to that point it will be a very particular kind of novel that uses some tricks (like being a surrealist or post-modern experimental novel or something else that avoids the usual conventions of narrative structure and story development).
I think it will be a long, long time before we have an AI Stephen King or Kazuo Ishiguro or Margaret Atwood. But I think we will have AI "authors" doing a "good-enough" job for the proles. Whether the slow-motion death of traditional publishing is a good thing or not I guess depends on how much you hate traditional publishing. I think gatekeeping is good, and that's what traditional publishing does. Publishers put out a lot of books I am not interested in and even think are very bad, but I can at least tell from the cover, the blurbs, and the author if it's likely to meet my minimal standards of readability. It's not like sifting through sewage for something sparkly. More like picking a few good apples out of a bin of mostly rotten ones.
I celebrate the flourishing of platforms that let anyone put their work out there, and a handful of indie authors are killing it on Amazon. But increasingly the indie world looks no different from trad publishing: there are a handful of big winners, most earn below minimum wage for their efforts, and the many thousands who basically earn beer money (if that) are now competing with LLMs that can scratch the same itch they do.
I am kind of in the middle ground between "they are just stupid stochastic parrots, they don't think!" and "obviously they will develop super-intelligent subagents if we just throw more neurons at the problem!", whereas I suspect that you are a bit more likely to agree with the former.
The latter case is easy to make. If you train a sufficiently large LLM on chess games written in some notation, the most efficient way to predict the next token will be for it to develop pathways which learn how to play chess -- and at least for chess, this seems to have mostly happened. Sure, a specialized NN whose design takes the game into account will likely crush an LLM with a similar number of neurons, but this nevertheless shows that if your data contains a lot of chess games, the humble task of next-token prediction will lead to the model learning to play chess (if it can spare the neurons).
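To spell out why that isn't mysterious: a recorded game is just a token sequence, so the standard training objective already contains "predict the next move" as a special case. A toy illustration (naive whitespace tokenization, purely for the sake of the example):

```python
# Toy illustration: a chess game in algebraic notation is just a token
# sequence, so "predict the next token" becomes "predict the next move".
game = "1. e4 e5 2. Nf3 Nc6 3. Bb5 a6 4. Ba4 Nf6 5. O-O Be7"
tokens = game.split()

# Build (context, target) pairs exactly as a language model would see them.
pairs = [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]

for context, target in pairs[:5]:
    print(" ".join(context), "->", target)

# To predict well across millions of such games, the cheapest "compression"
# of the data is something that behaves like an internal model of chess.
```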
By analogy, if you are trained on a lot of written material which took intelligence to produce, it could be that the humble next-token-predictor will also acquire intelligence to better fulfill its task.
I will be the first to admit that LLMs are horribly inefficient compared to humans. I mean, an LLM trained on humanity's text output can kinda imitate Shakespeare, and that is impressive in itself. But if we compare that to good old Bill, the latter seems much more impressive. The amount of verbal input he was trained on is the tiniest fraction of what an LLM is trained on, and Shakespeare was very much not in his training set at all! Sure, he also got to experience human emotions first-hand, but having thousands of human life-years' worth of descriptions of human emotions should be adequate compensation for the LLM. (Also, Bill's output was much more original than what an LLM will deliver if prompted to imitate him.)
Of course, just because we have seen an LLM train itself to grok chess, that does not mean the same mechanism will also work, in principle and in practice, to make it solve arbitrary tasks which require intelligence, just as we cannot conclude from the fact that a helium balloon can lift a postcard that it is either in principle or in practice possible, with enough balloons, to lift a ship of the line and land it on the Moon. (As we have the theory, we can firmly state that lifting the ship is possible, but going to the Moon is not. Alas, for neural networks, we lack a similar theory.)
More on topic, I think that before we see LLMs writing novels on their own, LLMs might become co-authors. Present-day LLMs can already do some copy-editing work. Bouncing world-building ideas off an LLM, asking 'what could be the possible consequences of some technology $X for a society', might actually work. Or someone who is skilled at world-building and plotlines but not particularly great at finding the right words might ask an LLM to come up with five alternatives for an adjective (with connotations and implications) and then pick one. This will still not create great prose, but not everyone reads books for their mastery of words.
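A minimal sketch of that last use case, again with the OpenAI SDK (prompt wording and model name are my own assumptions; the point is only that the human keeps the final choice):

```python
# Sketch of the "word-finding co-author" use described above: ask for five
# candidate adjectives with their connotations, then a human picks one.
from openai import OpenAI

client = OpenAI()

sentence = "The detective gave her a ____ smile before leaving."

reply = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any capable chat model
    messages=[{
        "role": "user",
        "content": (
            "Suggest five adjectives to fill the blank in this sentence, "
            "each with a one-line note on its connotations and implications:\n"
            f"{sentence}"
        ),
    }],
)

print(reply.choices[0].message.content)  # the author still makes the final call
```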
Yeah, but surprisingly poorly. 2024-era LLMs can be prompted to play chess at amateur to skilled amateur levels, but to get to the superhuman levels exhibited by doing move evaluations with a chess-specific neural net, you need to train it using self-play too, and to get to the greatly-superhuman levels exhibited by the state-of-the-art chess neural networks of several years ago, you need to also combine the neural nets with a framework like Monte Carlo Tree Search. Just pushing human data into a neural network only gets you a third of the way there.
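For what it's worth, the self-play part is conceptually simple even if the compute is not. A skeleton of the data-generation loop, using python-chess with a random policy standing in for the neural network (real AlphaZero-style systems also guide move selection with Monte Carlo Tree Search):

```python
# Skeleton of the self-play data generation that "pushing human data" lacks:
# play games against yourself, record (position, move, outcome), retrain.
# The random policy is a stand-in for a neural network policy.
import random
import chess  # pip install python-chess


def policy(board: chess.Board) -> chess.Move:
    """Placeholder policy: pick a uniformly random legal move."""
    return random.choice(list(board.legal_moves))


def self_play_game(max_plies: int = 200):
    board = chess.Board()
    history = []
    while not board.is_game_over() and len(history) < max_plies:
        move = policy(board)
        history.append((board.fen(), move.uci()))
        board.push(move)
    return history, board.result(claim_draw=True)


if __name__ == "__main__":
    positions, result = self_play_game()
    print(f"{len(positions)} positions generated, result: {result}")
    # In a real pipeline, these (position, move, result) records become fresh
    # training data, and the improved network plays the next batch of games.
```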
I'd guess that the "just pushing human data into a neural network only gets you a third of the way there" rule of thumb applies to a lot more than just chess, but it's a lot harder to "self-play" with reality than it is with chess, so we can't just make up the difference with more core-hours this time. Using "reasoning" models has helped, a little like how tree search helps in chess, by allowing models to try out multiple ideas with more than just one token's worth of thinking before backtracking and settling on their answer, but with a chess or go tree search there's still a ground truth model keeping things from ever going entirely off the rails, and reasoning models don't have that. I'm not sure what the AGI equivalent of self-play might be, and without that they're still mostly interpolating within rather than extrapolating outside the limits of their input data. Automation of mathematical proofs is perhaps the most "real-world" area of thought for which we can formalize (using a theorem language+verifier like Lean as the ground truth) a kind of self-play, but even if we could get LLMs to the point where they can come up with and prove Fermat's Last Theorem on their own, how much of the logic and creativity required for that manages to transfer to other domains?
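To make the "ground truth" point concrete: in the Lean setting, acceptance by the proof checker plays the role the rules of chess play in self-play. A deliberately trivial example (Lean 4 syntax; the theorem itself is only illustrative):

```lean
-- If Lean's kernel accepts the proof, the statement is true; no human
-- judgment or learned reward model is needed to grade the output.
theorem n_add_zero (n : Nat) : n + 0 = n := rfl
```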
I think this falls into the "shoggoth wearing a smiley face mask" meme that came about last year.
It's very clear to me that there's something in there that we can consider "intelligent" and that is performing "reasoning." (I avoid the terms "cognition," "consciousness," and "qualia" here.)
It takes inputs, performs some kind of calculation, and produces an output that is meaningfully derived from the inputs, which means it can do useful 'work' with that info: inferences, formation of beliefs, and possibly analyzing the truth value of a statement.
But the processes by which it does that DO NOT resemble human intelligence; we've just made it capable of accepting human-legible inputs and expressing its outputs in human-legible form too.
So expecting it to think 'efficiently' the way humans do is missing the forest for the trees. Or perhaps the brain for the neurons.
Hell, maybe it never really masters novel-writing before it gets smart enough to kill everyone, but it got good at the set of skills it needed while we were trying to teach it to write novels.