This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Notes -
This was a response to @cjet79:
But I decided I would make it a top comment, because it's my second favorite subject after sci-fi bullshit: literary snobbery with a side of AI.
First, I like AI. I mean, I like it as a tool. (And yes, I know that "AI" is still a misnomer; I understand that LLMs are just token predictors, and I think people who believe that any neural net is close to actually "thinking" or becoming self-aware - or that, really, what are we but pattern-matching echolalic organisms? - are drinking kool-aid.) I've used ChatGPT to build applications. (I don't do "vibe coding," but I have found it increases my productivity, because with the right prompts it helps me use new applications and libraries faster than I could by going through tutorials and manuals.) It cannot build a fully functional application (beyond the simplest) by itself, though. It often goes back and forth, recommending obsolete or unavailable libraries, or suggesting moving a line to the wrong place and then recommending I move it back in the next iteration. It's smart, and often makes very good recommendations for improving and optimizing code, and it spots subtle bugs and typos very easily. It's also stupid, and often makes terrible recommendations that will break your application.
On the hobby side, I've been making AI art, running Stable Diffusion on my PC and even training some LoRAs. The vast majority of AI art is, as they say, "slop" and very recognizable as AI, but that's mostly because the vast majority of AI art is "Type a few sentences into text box, copy picture that results." "A cat making a face." "A cute catgirl with an assault rifle giving a come-hither look to her incel AGP fetishist fans." You will get a picture that meets your requirements, but it will be very obviously plastic AI digital slop, like a Walmart t-shirt or a Subway sandwich. If you take the time to learn about inpainting and outpainting and ControlNet and upscaling and advanced prompt engineering and model selection and what all the parameters actually tweak, you'll get good pictures - pictures good enough to win Scott's various AI challenges.
Are they good enough for an AI to become a renowned professional artist with a unique and recognizable style? Not yet. But artists are rage-coping hard, insisting AIs aren't good enough to replace the vast majority of commercial artists who just need to draw hamburgers or cars or Corporate Memphis HR posters, or commissioned MCU fanservice. The sticking point now is no longer extra fingers or shadows going in the wrong direction (though most AIs will still make little mistakes that are tells for the observant - but these can be easily repaired!) but just the fact that it's still painful to go back and forth to get exactly the pose, position, expression, color shade, background, accessories, species of flower, that you want. With real artists you can talk to the artist, and the artist can do rough sketches and ask clarifying questions. With AIs, you generate 100 images, let the GPU go brrrrr, and maybe you get one or two that are kinda close and still need extensive inpainting and photoshopping. Conversely, though, I have commissioned some artists in the past, and while I was generally satisfied with the results, even a human will never be able to really capture the picture that's in your head. Enough time with Stable Diffusion and some photoshop ability will often actually come closer to the mark. AI art is getting better all the time, but IMO, it is not close to replacing truly talented high-end artists, just as AI is not close to replacing actual rock star programmers and innovators.
It is close to replacing the print shoppers, the commercial graphic arts majors, the SEO optimizers and storefront webapp builders, though.
So, can it write?
Yes and no. I've tried out some of the NovelAI apps and gazed upon the sorry state of Kindle Unlimited, already flooded with thousands of subpar self-published romantasy-written-while-fingering-herself slop and power-fantasy-written-while-jerking-himself slop, and now that has been multiplied seven and sevenfold by AIs churning out the results of all those Udemy and YouTube courses promising you can now make a living on Amazon without actually writing anything. Throw a million books out there with pretty covers and even if you make pennies per title, it adds up. AI has been devastating the short story market for a while now.
If we get to the point where AI can generate good stories, then... I guess I'd be happy to read AI-generated stories? I think we are a long, long way from there, though. And I have experimented. LLMs can generate coherent stories at this point. They have a plot, and some degree of consistency, and I suppose they have all the traditional elements of a story. I am not sure if they are up to generating an entire novel with one prompt yet - I haven't tried, but I know there are tools to let you coach it along to get a whole novel out of it.
But everything I have seen so far is crap. In fairness, most of what's on RoyalRoad (and Wattpad and AO3 and Scribd and all the other open platforms) is crap, but you can still tell what's human-written crap and what's AI slop.
I may be in the minority here; it often seems readers just don't care much anymore and want to consoom entertainment units. But waving my snooty literary tastes here, I sometimes despair at the writing some people think is good just because it tickles their ~~fetishes~~ sweet spots. Some genres (progression fantasies, litrpg, a lot of romance) are written so, so badly that if they aren't AI generated, they may as well be. An AI has no prose style except very poor mimicry of other styles; it has no ability to truly craft words and turn a phrase in a way that makes you say "Ah, yes, that is totally how that author writes." It has no way to embed themes and metaphors that echo throughout a book; it has no thematic consistency (often not even tonal consistency). Character arcs, such as they exist, are flat and linear; LLMs cannot grasp "character development" or complexity or nuance in any real way. If you want a book that's mental bubblegum - a linear power fantasy about a guy getting ever more powerful and punching bigger villains in the face, or a hot chick being fought over by two smoking alphas, with nothing more to it and not even any clever writing to sweeten the experience, just "thing happens and then thing happens and then thing happens" and maybe some purple descriptive modifiers mimicking a high school creative writing exercise - I suppose AIs can do that now. But nothing that even approaches the most derivative pastiches of true classic novels.
And that's just to produce one book. How about a series, a multibook arc preserving plot threads and MacGuffins and character development from one book to the next? An AI cannot do that, and I doubt it will be able to any time soon.
If you're not really a reader and consuming stories is like popping open a beer and you don't care how it tastes as long as it gives you a buzz, maybe AIs will fill that entertainment need. I sometimes put AI-generated soundtracks on as background music, and while the first few minutes can be okay, after a while it sounds very samey and droney and repetitive, even to my extremely unsophisticated ear (and my musical tastes are, in contrast to my literary tastes, utterly banal and horrible).
I don't doubt AI will continue to improve and eventually we'll have the first award-winning novel completely written by AI that even experts agree is actually... kinda good. But I am skeptical. I think it will take a while. I think even when we get to that point it will be a very particular kind of novel that uses some tricks (like being a surrealist or post-modern experimental novel or something else that avoids the usual conventions of narrative structure and story development).
I think it will be a long, long time before we have an AI Stephen King or Kazuo Ishiguro or Margaret Atwood. But I think we will have AI "authors" doing a "good-enough" job for the proles. Whether the slow-motion death of traditional publishing is a good thing or not I guess depends on how much you hate traditional publishing. I think gatekeeping is good, and that's what traditional publishing does. Publishers put out a lot of books I am not interested in and even think are very bad, but I can at least tell from the cover, the blurbs, and the author if it's likely to meet my minimal standards of readability. It's not like sifting through sewage for something sparkly. More like picking a few good apples out of a bin of mostly rotten ones.
I celebrate the flourishing of platforms that let anyone put their work out there, and a handful of indie authors are killing it on Amazon. But increasingly they are no different from the handful of authors who make it big in trad publishing: there are a few big winners, while most earn below minimum wage for their efforts. And now the many thousands who basically earn beer money, if that, are competing with LLMs that can scratch the same itch they do.
When comparing AI drawing abilities to writing abilities, I think a key difference is that for us as humans, drawing slop is harder than writing slop. What I mean by this is that an AI can generate something like a 4K image of aliens with energy swords, rendered in a hyper-detailed yet ultra-generic fashion, and the output is something that only 0.01% of humans have the technical skill to create. But when you ask an AI to write pages and pages of Shakespeare and the results are cliché-ridden gibberish, you can only look at it and say that, other than in terms of scale, I could've easily made this crap myself, through some dada-style cut-and-paste technique if nothing else. Essentially, putting letters in sequence to form words is easier than drawing entire images, and so when AI generates images - even though they're just reconstituted from data sets on the basis of predictive software and so involve no actual drawing - they still seem impressive.
But you then go on to talk about how it's helpful to you, how it can do art and coding and stuff. Doesn't that mean it's thinking? What is thinking if not intellectual labour that produces some kind of useful output?
See the cartoons here: https://x.com/emollick/status/1920700991298572682
How are these not proper newspaper-tier cartoons? It's not just pattern-matching, see the Cthulhu ones. How does that not require some kind of thought? If thought isn't required to make them, then so much the worse for thought. They're more amusing than many actual New Yorker cartoons.
What model are you talking about? When you say ChatGPT, that could be GPT-4o mini. It could be GPT-4o. It could be o3-mini, o1-pro, o4-mini-high, GPT-4.5 (RIP). OpenAI does a very good job of confusing people here, but there are major differences between 'slop for free' and 'serious compute for the subscribers'.
With a lot of finagling and wrangling, I can make Sonnet 3.7 produce a fully functional application with a database, logging, UI (admittedly not a fantastic UI), user authentication... It's not exactly simple - maybe 8000 lines of code, some quite long and complex functions. I'm nontechnical. It does need my human wisdom and feedback, but nonetheless, it's writing all the code. And while the code isn't perfect, it is fully functional.
I detect a fair bit of warranted snobbishness about AI from those initiated in the tech world. Yes, there are a bunch of idiots making simple apps on localhost:5000 and not even knowing what that means or why their bros can't click the link. Yet there is also unwarranted snobbishness. There are people making real projects with AI alone and earning revenue. See levelsio on twitter: he was making money with his multiplayer plane game thing. It's not a AAA game, but it shows that this isn't just a toy.
See also this one-shot coding challenge from gemini, this isn't exactly simple stuff: https://x.com/elder_plinius/status/1922126885783281755
I observed Sonnet 3.6 inserting themes in a story unprompted, it was a noticeable difference from 3.5. Not amazing themes but themes consistently and consciously referenced nonetheless.
I think for a lot of genre fiction, an AI book edited by a human would probably be just fine for the median reader. Most published books in genre fiction are written to be read quickly and forgotten just as quickly, written for people who want to read in transit between places (say on a bus, train, or plane) or as a pastime on vacation. It's not, nor was it ever intended to be, serious reading. And while I don't think AI at present can write well enough even for a beach read, it can produce something that would be publishable with a reasonable amount of developmental and line editing.
The advice for producing such novels is actually pretty cookie-cutter. There are known plot development tools (Save the Cat is the most common), character development sheets, and style advice. Training an AI to use the beat sheets and other advice would produce a reasonable rough draft of a novel. Editing those novels might still require a human touch, but it's probably not prohibitively expensive.
In terms of slop, I’m surprised Amazon hasn’t cracked down on AI-generated knockoff scams. I recently purchased Graydon Carter’s new memoir, and in searching for “Graydon Carter memoir”, the first result returned was the actual hardcover, When the Going Was Good: An Editor’s Adventures During the Last Golden Age of Magazines.
Then, the AI-generated paperbacks and e-books immediately followed: Graydon Carter Memoir 2025: From the Golden Age of Mazines to the Digital…, Graydon Carter Memoir 2025: When the Going Was Grand, Graydon Carter Memoir: When the Going Was Perfect, Graydon Carter: The Biography…
Perhaps Amazon is just dealing with a game of whack-a-mole, or maybe they don’t really care?
This is amazing, it's like Borges' Library of Babel.
If I were Amazon, I think I'd have a hard time drawing a line between actual content and low-effort slop. Though honestly, that sounds like a great use for LLMs.
That’s a great reference. ❤️
Given all the horror stories I've casually heard regarding Chinese sellers and re-sellers on Amazon, I doubt they care at all.
I think I'm a little bit broken in my set of preferences for certain art forms. For a long time I've lacked the ability to understand and explain why. Video games have helped, but music might have the best metaphor, even if it doesn't apply to me.
First, imagine that there is an objective ranking for how good a piece of music can be. The ranking stands regardless of individual preferences. More sophisticated listeners who can appreciate music better will have their preferences more in line with this objective ranking.
Second, imagine you have some unique ears, and the sound of string instruments just really bothers you. So you prefer any music without string instruments.
Most of the best music includes some string instruments, so you end up not liking most of the "best" music. The best rating doesn't require string instruments; it's just that they make some things easier in the course of crafting the music. A theoretical best song could be crafted that has no string instruments; it would just be much more difficult. Your tastes end up looking very unsophisticated. You gravitate towards an amateur community of songwriters who share your hatred of string instruments, and some of them are just bad at writing any songs with string instruments. They write songs that are relatively bad on the objective ranking, but at least the string instruments are gone, so they become more tolerable to you than the mainstream stuff.
Something like this has happened to me in regards to reading and literature. There are common story elements like certain foreshadowing techniques and certain character development tricks that really grate on me. And there are story settings that I dislike, mostly modern and non-magical settings are boring to me.
I've ended up in a weird spot, like the string instrument hater. I can only really enjoy the other authors that also hate string instruments, or the amateurs that can't even write string instruments into their music. I am probably reading stories and literature that are "objectively" worse on some cosmic literature scale, and I'm well aware that it makes my tastes look unsophisticated and "bad" to the elites of the literature world. But I can't stop and won't stop, because I have some subjective preferences that entirely override the importance of the objective scale.
That's interesting - any examples of literary elements you don't like?
A lot of foreshadowing techniques. It takes me out of the story when I see it, because I'm strongly reminded that it is a story with an end destination in mind by the author.
Certain ways of handling characters. Death for side characters when the author wants them out of a story. Torture or horrible circumstances for a main character as a way to toughen them up or get the reader to feel sorry for them.
My full original comment:
Re-reading my first sentence as standalone I guess it could be interpreted one of two ways:
I meant it in the second sense. I definitely think the AI stories right now are a lot of hot garbage, for all the reasons you've mentioned.
I am kind of in the middle ground between "they are just stupid stochastic parrots, they don't think!" and "obviously they will develop super-intelligent subagents if we just throw more neurons at the problem!", while I suspect that you are a bit more likely to agree with the former.
The latter case is easy to make. If you train a sufficiently large LLM on chess games written in some notation, the most efficient way to predict the next token will be for it to develop pathways which learn how to play chess -- and at least for chess, this seems to mostly have happened. Sure, a specialized NN whose design takes the game into account will likely crush an LLM with a similar amount of neurons, but nevertheless this shows that if your data contains a lot of chess games, the humble task of next-token-prediction will lead to you learning to play chess (if you can spare the neurons).
By analogy, if you are trained on a lot of written material which took intelligence to produce, it could be that the humble next-token-predictor will also acquire intelligence to better fulfill its task.
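To make the chess example concrete, here is a deliberately tiny stand-in for next-token prediction: a bigram counter over move tokens. This is nowhere near an LLM - the games below are a handful of made-up opening lines for illustration - but it shows how bare next-token statistics over game records already start to encode plausible continuations:

```python
from collections import Counter, defaultdict

# A handful of made-up opening lines, as whitespace-separated move tokens.
games = [
    "e4 e5 Nf3 Nc6 Bb5 a6",
    "e4 e5 Nf3 Nc6 Bc4 Bc5",
    "e4 c5 Nf3 d6 d4 cxd4",
    "d4 d5 c4 e6 Nc3 Nf6",
]

# Count how often each move follows each previous move.
follows = defaultdict(Counter)
for game in games:
    moves = game.split()
    for prev, nxt in zip(moves, moves[1:]):
        follows[prev][nxt] += 1

def predict(prev):
    # Greedy "next-token prediction": pick the most frequent continuation.
    return follows[prev].most_common(1)[0][0]

predict("e4")  # -> "e5" (seen twice in the toy corpus, versus "c5" once)
```

Scale this idea up by many orders of magnitude - long contexts instead of one previous token, learned representations instead of raw counts - and you get the "pathways which learn how to play chess" described above.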
I will be the first to admit that LLMs are horribly inefficient compared to humans. I mean, an LLM trained on humanity's text output can kinda imitate Shakespeare, and that is impressive in itself. But if we compare that to good old Bill, the latter seems much more impressive. The amount of verbal input he was trained on is the tiniest fraction of what an LLM was trained on, and Shakespeare was very much not in the training set at all! Sure, he also got to experience human emotions first-hand, but having thousands of human life-years' worth of descriptions of human emotions should be adequate compensation for the LLM. (Also, Bill's output was much more original than what an LLM will deliver if prompted to imitate him.)
Of course, just because we have seen an LLM train itself to grok chess, that does not mean that the same mechanism will also work in principle and in practice to make it solve arbitrary tasks which require intelligence, just like we can not conclude from the fact that a helium balloon can lift a post card that it is either in principle or in practice possible with enough balloons to lift a ship of the line and land it on the Moon. (As we have the theory, we can firmly state that lifting is possible, but going to the Moon is not. Alas, for neural networks, we lack a similar theory.)
More on topic, I think that before we see LLMs writing novels on their own, LLMs might become co-authors. Present-day LLMs can already do some copy-editing work. Bouncing world-building ideas off an LLM, asking 'what could be possible consequences of some technology $X for a society', might actually work. Or someone who is skilled with their world-building and plotlines but not particularly great at finding the right words might ask an LLM to come up with five alternatives for an adjective (with connotations and implications) and then pick one. This will still not create great prose, but not everyone reads books for their mastery of words.
Yeah, but surprisingly poorly. 2024-era LLMs can be prompted to play chess at amateur to skilled amateur levels, but to get to the superhuman levels exhibited by doing move evaluations with a chess-specific neural net, you need to train it using self-play too, and to get to the greatly-superhuman levels exhibited by the state-of-the-art chess neural networks of several years ago, you need to also combine the neural nets with a framework like Monte Carlo Tree Search. Just pushing human data into a neural network only gets you a third of the way there.
I'd guess that the "just pushing human data into a neural network only gets you a third of the way there" rule of thumb applies to a lot more than just chess, but it's a lot harder to "self-play" with reality than it is with chess, so we can't just make up the difference with more core-hours this time. Using "reasoning" models has helped, a little like how tree search helps in chess, by allowing models to try out multiple ideas with more than just one token's worth of thinking before backtracking and settling on their answer, but with a chess or go tree search there's still a ground truth model keeping things from ever going entirely off the rails, and reasoning models don't have that. I'm not sure what the AGI equivalent of self-play might be, and without that they're still mostly interpolating within rather than extrapolating outside the limits of their input data. Automation of mathematical proofs is perhaps the most "real-world" area of thought for which we can formalize (using a theorem language+verifier like Lean as the ground truth) a kind of self-play, but even if we could get LLMs to the point where they can come up with and prove Fermat's Last Theorem on their own, how much of the logic and creativity required for that manages to transfer to other domains?
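The role of a ground-truth rules model in self-play can be sketched with something far simpler than MuZero or full MCTS: plain Monte Carlo rollouts for the game of Nim (normal play: whoever takes the last stick wins). This is a toy illustration, not anyone's production system; the point is that `legal_moves` acts as the verifier, so random self-play can never wander off the rails the way an unverified reasoning chain can:

```python
import random

def legal_moves(n):
    # The "rules model" / ground truth: you may take 1-3 sticks,
    # never more than remain. Every candidate move passes through here,
    # so no rollout can ever contain an illegal move.
    return [k for k in (1, 2, 3) if k <= n]

def rollout_mover_wins(n, rng):
    # Play the game out with uniformly random legal moves.
    # Returns True if the player to move at state n takes the last stick.
    player = 0  # 0 = the player to move at state n
    while n > 0:
        n -= rng.choice(legal_moves(n))
        if n == 0:
            return player == 0
        player ^= 1
    return False  # n == 0 on entry: the previous player already won

def best_move(n, rollouts=3000, seed=0):
    # Score each legal move by one minus the opponent's estimated
    # win rate after we play it, and pick the highest-scoring move.
    rng = random.Random(seed)
    scores = {}
    for k in legal_moves(n):
        opponent_wins = sum(rollout_mover_wins(n - k, rng)
                            for _ in range(rollouts))
        scores[k] = 1 - opponent_wins / rollouts
    return max(scores, key=scores.get)
```

From 5 sticks this settles on taking 1, leaving the opponent a multiple of 4, which is in fact the game-theoretically optimal move. Chess and Go self-play use the same skeleton with neural networks in place of uniform random rollouts; the hard part for any "AGI self-play" is that reality hands us no `legal_moves` function.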
MuZero would like a word.
At one point during training, the training environment was needed to keep MuZero from going off the rails and making illegal moves. Once it learns the rules of the game well enough, though, the policy network becomes sufficiently unlikely to output illegal moves that I expect it would continue to improve indefinitely through self play without sampling any illegal moves.
I do wonder if anyone has tried that experiment. It seems like it gets at the core of one of the cruxes people have about recursive self improvement.
I think this falls into the "shoggoth wearing a smiley face mask" meme that came about last year.
It's very clear to me that there's something in there that we can consider "intelligent" that is performing "reasoning." (I avoid the terms "cognition" and "consciousness" or "qualia" here.)
It takes inputs, performs some kind of calculation, and produces an output that is meaningfully derived from the inputs, which means it can do useful 'work' with that info: inferences, formation of beliefs, and possibly analyzing the truth value of a statement.
But the processes by which it does that DO NOT resemble human intelligence; we've just made it capable of accepting human-legible inputs and expressing its outputs in human-legible form too.
So expecting it to think 'efficiently' the way humans do is missing the forest for the trees. Or perhaps the brain for the neurons.
Hell, maybe it never really masters novel-writing before it gets smart enough to kill everyone, but it got good at the set of skills it needed while we were trying to teach it to write novels.
This feels like MOSTLY a solved problem with ChatGPT's o3 image generation capability.
You can feed it a few reference images for what you're trying to get to, including poses and background; with a sufficiently precise prompt, you WILL get something very, VERY accurate to your intentions. It does NOT do a great job of making precise adjustments from there, and currently it doesn't do inpainting, but taking the image it produced and running it through Stable Diffusion or just manual photoshop gets you to the finish line.
One thing it is actually very good at: feed it an image representing a tattoo you're thinking of getting, then an image of your bare skin in the area you want that tattoo, and it can produce an image showing you what that tattoo would look like. And THEN you can pay a human artist to hopefully execute on that vision well.
I have had annoying problems where it remembers something you asked for earlier and keeps including that in the image even after you tell it to move on or forget, but that's fixed by starting a new window with the most recent output.
I don't see how a human artist can outcompete this on cost or time. I CAN see how you might still pay a human to actually do the work of interacting with the AI and modifying outputs to get close to a particular vision.
Similarly, SONG PRODUCTION is now just about indistinguishable from full human work. To me, a decently done full AI song will have almost zero tells unless the creator set out to make it obvious.
Betting against the AI capabilities approaching peak human is probably a losing proposition unless we ARE very, very close to the plateau of what can be achieved with the current paradigm.
AI is now better than the best chess players, and better than the best Go players, and while novel-writing is a different combination of skills and intellect than either of those, the AIs have already learned to write coherently, and so I expect tacking on the additional capabilities will scale the machine into Stephen King territory pretty quickly.
My understanding is that o3 image gen is identical to the regular chatgpt image gen (famed for the ghiblification wave). Both cases call out to gpt-image-1 which to be fair is much better than dalle and stable diffusion and the like at following prompts.
https://platform.openai.com/docs/models/gpt-image-1 https://platform.openai.com/docs/models/o3
This is actually a very defensible position
It was a very defensible position every time, then some new advance blew past it.
I'll listen to the defense, but I'm not placing my bets on that side.
I am getting pissed off with the AI assistant crap being suggested to me at work (no, Adobe, I do not need the inbuilt AI assistant to "simplify this document for me" when I'm reconciling a blinkin' bank statement) and I think much of the enthusiasm over AI is because it's all software engineering.
It seems to be useful (not perfect but useful) if you're writing code. Or if you're dumping your homework on it to write your essays and cheat your exams for you. But for use by ordinary people otherwise? Apart from the slop art and extruded fiction product you mention, I don't yet see it doing anything useful.
I don't need it to write a shopping list and order online for me, just in case someone wants to use that as an example. That's for people who only buy the same things over and over and have more money than time.
If anyone has an example of "this is how I use it for work/at home and it really saves me time and mental energy", I'd be glad to hear.
I've fed prose I've already written into it to make refinements or check for quality. I just wish you could get it to stop glazing everything put in front of it.
I prompted Deepseek with:
and it replied dryly enough, although don't quote me on the quality of its advice.
I've used AI to write cover letters on job applications. One of those applications got me a teaching job which paid $20,000 more than what I was doing before, so if the cover letter made any difference ChatGPT Plus has more than paid for itself.
On the same job, I used it to generate a slogan which the administrators liked, and some images including the school mascot which had a very positive reception.
Sadly, it didn't save me from getting fired at the end of the year for failing to control the kids.
I wanted to make slow-rising pizza dough from scratch. The online slow-rising whole-wheat bread recipe I liked gave all proportions in weights. I don't have a cooking scale. So I uploaded it to Claude and asked it to convert the recipe to cup measurements. I noticed the water-to-flour ratio had changed, so I had it explain why, and learned quite a bit about the role protein plays in dough. Then I had it re-do the recipe, substituting semolina for a quarter of the flour. Finally, I had it scale the recipe for the two particular pizza sizes I planned to make. Time: about 10 minutes, because I got side-tracked into the protein thing and had to check it out elsewhere.
Two days later, I get two delicious pizzas.
I would love to hear more.
Higher-protein flour absorbs more water, and that protein is what makes the dough elastic. All-purpose flour is 9-11% protein, bread flour 11-13% (depending on brand), semolina 12-13%, whole-wheat 13-14%. Recipes with the higher-protein flour will need more water.
I have learned, through trial and much error, never to make bread either from all-purpose flour, or from 100% whole-wheat.
Also, it seems that bakers really do rely on ratio-by-weight rather than ratio-by-volume. When I asked Claude to convert the recipe from 1000 grams of Tipo 00 flour to one with a quarter of it as semolina, it gave me:
Noticing that the cups did not add up to a quarter being semolina, I asked it to re-check its calculations or explain its results, and it did, explaining that semolina is denser.
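For anyone wanting to sanity-check that kind of conversion themselves, here's a minimal sketch. The grams-per-cup densities are my own rough assumptions (they vary by brand and how you scoop), not figures from Claude's answer:

```python
# Approximate grams per US cup for two flours (assumed values;
# real densities vary by brand and how the cup is packed).
GRAMS_PER_CUP = {
    "tipo_00": 120.0,   # finely milled wheat flour
    "semolina": 167.0,  # denser, coarser grind
}

def grams_to_cups(grams: float, flour: str) -> float:
    """Convert a flour weight to an approximate cup measure."""
    return grams / GRAMS_PER_CUP[flour]

# A 1000 g recipe with a quarter of the flour swapped for semolina:
tipo_cups = grams_to_cups(750.0, "tipo_00")
semolina_cups = grams_to_cups(250.0, "semolina")
print(f"{tipo_cups:.2f} cups tipo 00, {semolina_cups:.2f} cups semolina")

# By weight semolina is exactly a quarter, but by volume it comes out
# to less than a quarter of the total cups, because semolina is denser.
```

Which is exactly the "cups don't add up to a quarter" effect: equal weights of a denser flour occupy fewer cups.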
I don't, but I have a related observation. Because there are LLMs of very different capability available, we don't have a "one screen, two movies" situation but the reverse: nominally the same movie on two screens. One screen is 4K Ultra HD and the other is a camcorded VHS tape. In this thread and on other forums, savvy people truly test this shit, constantly trying out the current state of the art, enthusiastically adopting it, and reporting amazing results. I almost believe it is that good. Meanwhile, at work, my coworkers are not savvy at all, yet enthusiastically adopt default-tier ChatGPT. Which is shit. I call it ShitGPT.
I have watched a "senior" engineer, who used to work perhaps a bit above his skill level but was quite okay, start doing quite stupid stuff, like copy-pasting ChatGPT output in a live code review call without even looking at it, including the time ShitGPT decided it wanted to write the answer in C# instead of C++. Another engineer caused a week of mayhem because he uncritically trusted a ChatGPT "summarization" of library documentation, except that halfway through, ShitGPT had stopped summarizing the documentation and switched to hallucinating, causing the most curious bugs.
I occasionally find it useful for queries that don't work well in ordinary Internet search engines. Here's my Gemini history.
I lament the continued decline in the quality of modern media, particularly in writing and pacing. A part of me hopes studios start using AI so that it either blows up in their faces and they start valuing quality again, or it creates so much slop that some of them focus on quality to stand out.
In a somewhat similar vein, I don't care one zilch about some acquaintances' complaints about AI coming for music and overriding everything with slop. From my point of view that already happened 25 years ago, only with human slop (a.k.a. modern trends), which not only overwhelmed the quality stuff in volume but outright put it in front of a firing squad and pulled all the triggers at once, to ensure none of it remains except as old recordings.
While I do agree with everything substantive and specific you wrote, I think the framing falls into a trap common to a lot of thinking about AI. Specifically, that AI will simply extend or accelerate a given domain and technology. In this case, publishing and fiction.
There's not going to be an AI written book that wins any prestigious award. This is because it would be foolish to simply have an AI write one immutable story. Instead, "AI writers" will be either fine-tuned or wholly trained models that people use to write stories on the fly that still adhere to a central plot, world, and character collection.
To use a common reference point, let's take Game of Thrones. People have their favorite characters, subplots, settings, etc. With an AI-writer-model, you could say "Hey, write a new subplot where that blonde with the dragons and whatnot flies on up to the blizzard place and fucks around for a while." (side note: I never watched or read GoT, ironically enough, so all my references are going to be bad)
Now, you're creating new content that still stays within the "world" of GoT. And it works at innumerable levels of detail. The casual consooomer will write one sentence slop generator stuff - and love it. The aficionado will create complex subplots and tweak small elements of character profiles to see how these reverberate throughout the grander story. I predict that once the cost of GPUs gets low enough (or models get efficient enough) people will literally be writing and producing full scale movies at home.
Instead of human authors and writers being the nucleus of "art," it will be a constellation of models, with humans recombining them ad infinitum. I look at this as a good thing. You can un-cancel your favorite show (The Wire!), Hemingway becomes immortal and produces infinite books. Unlimited GoT fanfic erotica (......yay?)
I know this will happen because I'm already doing it. My mental bubblegum is hardboiled neo-noir paperbacks. Think something in the vein of The Last Good Kiss. Over the course of a dozen 2-3 hour evenings, I've put together a GitHub repo of characters, settings, themes, etc. I've used an AI toolchain to develop scenes. I then line edit them, mostly for continuity issues (which AI still stumbles on) or to make a sudden plot twist because I feel like it. I am not doing this to publish a book. I am doing this because I genuinely find it far more entertaining and exciting than Netflix scrolling or re-watching the actually good stuff. And it's low stakes. I don't really care if the plot doesn't quite hold together. I don't care if a character's motivation self-contradicts after a while. It's fun. It's unlimited fun. Over the 40+ hours I've put into it, I've probably spent $100 in API credits. You can definitely argue that's actually quite a bit less cost effective than Netflix etc. But I believe the received value is excellent.
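The repo-plus-toolchain workflow described above can be sketched as a small prompt-assembly script. Everything here, the file layout, the helper names, the prompt wording, is a hypothetical illustration of the general pattern, not the actual toolchain:

```python
from pathlib import Path

def load_bible(repo_dir: str) -> str:
    """Gather character/setting/theme markdown files into one 'story bible'."""
    sections = []
    for path in sorted(Path(repo_dir).glob("*.md")):
        sections.append(f"## {path.stem}\n{path.read_text()}")
    return "\n\n".join(sections)

def build_scene_prompt(bible: str, instruction: str) -> str:
    """Combine the story bible with a one-off scene instruction."""
    return (
        "You are drafting a scene in a hardboiled neo-noir novel.\n"
        "Stay consistent with this story bible:\n\n"
        f"{bible}\n\n"
        f"Scene to write: {instruction}\n"
    )

# The assembled prompt would then be sent to whatever LLM API you're
# paying credits for; the generated scene gets line-edited by hand.
```

The point of keeping characters and settings as plain files in a repo is that the "canon" is versioned and human-editable, while each scene generation just re-reads the current state of it.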
AI will not be a linear extension of current industries. I'm not saying it's a step-function for everything either. It will simply be a very hard to predict tangent. In many cases, this will be absolutely good for all parties. In many cases it will be a massive tradeoff and shift in the "center of gravity." I think there are only a few cases I can see where it represents a system-breaking potential.
Anyways, I'm off to writeread about Detective Jar-Jar Binks' latest case involving Anton Chigurh.
I feel like apps like Infinite Worlds are already tapping into this kind of thing. It's a relatively decent AI Choose-Your-Own-Adventure website that allows for a human creative to set up "worlds" with set plot points or details for players to play through. It's not as good as my favorite AI-powered game to date (the sadly defunct Medieval Problems), but it seems to have taken a writing forum I frequent by storm.
The problem for writers is that, outside of really amazing talents, 99% of it is some flavor of bubble gum. You don't need to create award-winning stuff.
Take science fiction. A lot of it is some form of retooled space opera. If I took the basics of such a story — a story set in space, space battles, robots, and so on — I'd probably be able to prompt an LLM into producing a decent first draft of a space opera. Taking story elements and recombining them is how the shelves of Barnes & Noble get filled. Yes, most of it, from a literary standpoint, is crap. I don't think most people who appreciate good literary fiction would claim that 99% of the stuff available is even decent as literature. It's only better than trash TV in the sense that it requires you to read the story yourself rather than having actors read the story to you. But then again, that's what the public generally wants in entertainment. They don't want to have to think about what they're reading; they don't even really want to notice any particular literary quirks of the author. They want to mostly escape the world and, for the most part, be able to congratulate themselves for reading instead of watching a video or playing spider solitaire on their phones.
Yes, it is already happening, and it was even before AI. Entertainment media can be provided bespoke - that's exactly what artists working on commission do. For a whole lot of people and purposes, the quality/price curve is or very soon will be in AI's favor. I have a couple hours of music about wizards drift racing and I am eager for the moment I can poke at an AI for a bit and receive custom made retro game bubblegum tailored to my exact whims.
I'm willing to predict a >50% chance that some guy in his basement (okay, maybe expand it to a "dedicated team of five or fewer people") manages to produce a feature-length (90 minute) film that is completely AI-generated and, to the general audience's view, is on par with mid- or low-budget Hollywood fare, in terms of 'quality' of the end product... by the end of THIS year. It's already been a year since I saw the Shadowglades 'trailer' which, despite being just 2 minutes of disconnected imagery, portrayed a world I would really like to enter and set stories in. And just today those folks put out a new trailer that is just as visually interesting, and much more dynamic and coherent! I can tell who the protagonist is!
I'd predict it WON'T be an action movie, because no video AI I've seen can produce a legible fight scene, plus there are all the model restrictions on depicting violence. Not Scorsese quality for sure, and it'll play to AI's strengths and eschew its shortcomings, but it will be coherent visually and plot-wise.
But even if that basement guy started TODAY, if he can produce 1 minute of usable footage a day on average, it'd be 90 days to get the footage, which leaves another 90 to edit, adjust, produce an (AI-generated) soundtrack, and fine-tune actor performances and 'line reads'. Doable for a dedicated enough, decently talented enthusiast with enough money to burn on the credits. And that assumes someone isn't already halfway done with one.
I'm already champing at the bit to start work on the pilot episode of an anime adaptation of one of my favorite books, and the early results I've been getting with just the free options available have convinced me I could pull off producing a ~20 minute episode in about 1 month if I were able to fully 'lock in' to doing it. I won't lock in, life just won't allow that right now, but it'd be such an invigorating project that, like you and your pulp novel generator, I'd be willing to spend like $100/month or so just working on it for the sheer pleasure of seeing the end product, even if it's never published or enjoyed by anyone else.
@faceh @phailyoor It wasn’t done with AI, but one guy in Latvia already managed to effectively make an academy award winning box office success from his basement. It’s called Flow
How much are you willing to wager on this claim? What are examples of a baseline of mid-budget Hollywood?
"Mid-Budget Hollywood" would be approximately any recent A24 film..
With stringent enough definition and an agreeable arbiter, I'd put up $500 in favor of it, at even odds.
Note I'm NOT saying the film gets a theatrical release or gets published on a streaming platform, just that someone releases the movie for the viewing public, even if it's just a random download link, and that an average American citizen could watch it and NOT immediately guess it was AI-generated. It doesn't have to fool a film buff, but it should fool an adult, not just a kid.
I'd also still consider it a win if the film were less than 90 minutes long, but that's the fairish benchmark for 'feature length' that would differentiate it from a TV episode.
My use case is similar to what you describe. A gacha that I play has 300+ playable characters; that's 50k possible interactions just one-on-one (100k if you care who's on top, har-har) — forget official writing, there are not enough fanfic authors for that. DeepSeek might not get what I'm going for with an experiment victim who retains emotions but has trouble expressing them, and a guy with a literal emotion transceiver as a race trait who just doesn't have many emotions, but after pointing it out it can write a passable scene between the two.
I’d be surprised if no gacha studio is already working on officially integrating such generative functionality.
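The pairing arithmetic above is simple combinatorics. A quick check, assuming a roster of exactly 300 for illustration (with "300+" characters, the round 50k/100k figures check out, since the count grows quadratically):

```python
from math import comb, perm

roster = 300
unordered = comb(roster, 2)  # pairings where order doesn't matter
ordered = perm(roster, 2)    # ordered pairings ("who's on top")

print(unordered)  # 44850
print(ordered)    # 89700
```

At a roster in the low-to-mid 300s the unordered count crosses 50,000, which is where the comment's round figures come from.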
You may be right that AI basically creates a new sort of entertainment experience (e.g., tooling together a pipeline to create your own homebrew fanfiction). And there is nothing wrong with just doing what's fun. My reaction was mostly just, I guess, a defense of actually caring about literary quality. Not that everything you read/enjoy has to be high quality (I like my litrpgs and cheesy space operas too.)
Oh, I think you were right and have a very valid point.
In regards to "high art" literature, I think we're going to see a revenge of the typewriter. Writers will make a point to not only not use AI, but to disconnect entirely and write only from inside their own brain. I earnestly believe some will even resort to using typewriters again as a verifiable medium - there's no way I AI'ed this. Hell, maybe some will even return to longhand.
And this will create both excellent literature, and a snobbery class of weirdo "purebread writers" who still turn out slop, but they do it with artisanal pencils and free-range raised tree paper.
Tree paper? Like the stuff you wipe your ass with? I only read works scribed on vellum.
I agree.
AI is indistinguishable from a junior dev.