MathWizard
Good things are good
"Life is one long series of problems to solve. The more you solve, the better a man you become." -Sir Radzig Kobyla, March of 1403
I have been playing Kingdom Come: Deliverance and, while it was already clear pretty early on that this was not a woke game, this line really drove that nail home for me. You would never hear a line like this in a modern American video game. It's not even anti-woke: as a game from the Czech Republic, it's so far removed from the modern American culture war that it just doesn't care. The line comes in response to being asked why God allows so much evil in the world, and the man responds that it's probably a test so we can become better by overcoming it.

Everyone is a medieval Catholic (except the foreign Cumans, who are barbaric and evil, but also way, way stronger than your local bandits, which makes it terrifying when you stumble on one early game and probably need to run away instead of fighting), and it's just kind of there in the background morality of the individual characters. There's a quest where you go back into the ruins of a town that was just destroyed and is still crawling with bandits and scavengers in order to bury your murdered parents, putting yourself in danger for no reason other than respect for them and wanting them not to get stuck in purgatory. And yet it's not as if the story is glazing Christianity either: it's got plenty of evil and corrupt people abusing the system, and even a drunk and lecherous priest who is preaching Protestant reformation against the Catholic church and its money-grubbing ways. Characters believe things because it makes sense for their character to believe them in this culture, and the narrative isn't using them as a cudgel to propagandize to you that they're obviously right or wrong.
What I think I like about it most of all is that it's an open-world Western RPG where your character is... actually a character. You play as Henry, a blacksmith's son from a town, with parents and friends and a personality. He speaks, he has opinions, he makes decisions you cannot control that drive the plot forward. He is not a blank, faceless self-insert who gets swept along in some chosen-one plot so that you can pretend he is actually you in this world. Henry is Henry in this world, and that gives the writers so much more room to write a real story that involves him, because they can make him do and say the things the story needs a protagonist to do and say. They do a clever job of giving him a bit of moral gray at the beginning, with a good and honest father who tells him to do what's right and a bunch of mischievous friends trying to get him to misbehave, so that whether you decide to run around stealing and murdering or decide to be good and helpful, both are still kind of "in character". But there is a character, and I really like that and think most Western RPGs are missing it.
I haven't finished it yet, so I can't speak for an overall review of how good it stays or how the narrative wraps up in the end, but I am very much liking it so far.
We need Lord StrAInge's Men, a troupe of AIs that can read, review and dismiss AI slop just as quickly as it's written instead of relying on avid human readers.
An AI that can accurately identify and dismiss slop is 90% of the way towards producing quality content, since you could just build the generative AI with that skill built in (and train it on that).
Which is to say, maybe in 10 years this will be a mostly non-issue. If they reach the point where they can generate thousands of genuinely high quality and entertaining stories, I'll happily consume the content. I think "human authorship" as a background principle is overrated. It has some value, but that value is overrated in comparison to the actual inherent value of the work. The problem with slop is that it's not very good, regardless of whether it's generated by humans or AI. Once it's good then we're good.
Survivorship and selection bias work at the population level as well as at the level of individual works. How many hundreds or thousands of playwrights existed in Shakespeare's time? And yet most are forgotten, while the best of the best (Shakespeare's works) are what are remembered and enjoyed and studied.
Also, there definitely is variation within an individual author's works. How much time and effort do people spend studying "The Two Gentlemen of Verona"? Is it actually a good work? Personally I haven't read it, but given how little it's talked about or ranked on people's lists, my guess is that it's mid and the only reason anyone ever talks about it at all is because Shakespeare is famous for his other plays. That is, Shakespeare wrote 38 plays, and while his skill was well above average (so his average work is better than the average play), they're not all Hamlet. But one of them was. He didn't write a hundred plays and then only publish the best; he wrote 38, published them all, and got famous for the best few (which in turn drove interest in the rest above what they actually deserve on their own merits).
Insofar as AI is likely to vary less in "author" talent, since whatever the most cutting-edge models are will be widely copied, we should expect less variance in the quality of individual works. But there will still be plenty of variation, especially as people get better at finding the right prompts and fine-tuning to create different deliberate artistic styles (and drop that stupid em-dash reliance).
I tentatively agree that there are limits to this. If you took AI from 5 years ago, there is no way it would ever produce anything publishably good. If you take AI from today, I don't think it could ever reach the upper tier of literature like Shakespeare or Terry Pratchett. However, this statistical shotgun approach still allows one to reach above their station: the top 1% of AI work today might be able to reach Twilight levels, and if each of those has a one in a million chance of going viral and being the next Twilight, then you only need to publish a hundred million of them and hope you get lucky. Clearly we've observed that you don't need to be Shakespeare in order to get rich; it's as much about catching the public interest and catering to (and being noticed by) the right audience as it is about objective quality, and that's much more a numbers game.
I do think that AI lacks the proper level of coherence and long-term vision to properly appeal to a targeted audience the way something like Twilight or Harry Potter does. But a human curator (or possibly additional specialized AI storyboard support) could probably pick up the slack there (although at that point it's not quite the shotgun approach, more of a compromise between AI slopping and human authorship, mixing the costs and benefits of both).
The amplified productivity also magnifies the effect. That is, you can achieve greater success with a lower mean quality, because instead of having a thousand humans write a thousand works and then picking the best one, you can generate ten million AI works and pick the best one, allowing you to select more standard deviations up the distribution. Which means there will be literal millions of AI slop works of very low average quality, published just in the hope that one will rise to the top.
This makes discovery a lot harder and wastes more time for the pioneering readers who have to wade through slop in order to find the good stuff.
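A toy simulation makes the "select more standard deviations up" point concrete. The distributions, means, and sample sizes here are invented purely for illustration, not claims about real quality levels:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy assumption: the "quality" of each work is a draw from a normal distribution.
# Human authors: higher mean quality, far fewer works.
human_works = rng.normal(loc=1.0, scale=1.0, size=1_000)
# AI generation: a full standard deviation lower mean, but vastly more works.
ai_works = rng.normal(loc=0.0, scale=1.0, size=10_000_000)

print("best human work:", human_works.max())  # typically around 4 (mean + ~3 sd)
print("best AI work:   ", ai_works.max())     # typically around 5.3 (mean + ~5 sd)
```

Even with a noticeably worse average, the ten-million-draw pool's best sample usually lands further out in the tail than the thousand-draw pool's best, which is exactly the shotgun dynamic described above.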
Kind of. I guess it's Berkson's paradox applied to a specific class of cases where the output is easy to observe (and often just "this is a big enough deal for me to have noticed it"), and the variable you care about is harder to directly observe than the other variables.
This reminds me of a post I made about grassroots movements and the math of why that trait matters. If you have two variables x, y which combine to create some output f(x, y) that is increasing with respect to both x and y (as a simple example, f = x * y), then observing one of the variables to be large decreases your estimate of the size of the other one. (I.e., if you know f and y, but can't observe x directly, you estimate x = f / y.) More generally, you construct a partial inverse function g(f, y), and then g will be decreasing with respect to y.
In less mathematical terms: you observe an effect, you consider multiple possible causes of the effect, and one of them being high explains away the need for the others to be high. In the grassroots example: there are lots of protestors; this could be caused either by people being angry or by shills throwing money around to manufacture a protest (or some combination of both). You observe shills, so you conclude people probably aren't all that angry, or at least not as angry as you would normally expect for a protest of this size (if they were, and you had both anger AND shills, the protest would be even larger).
In this case, you observe a post about a political event which is getting a lot of attention, f. This popularity could be caused by a number of things, such as insightful political commentary (x) or a hot woman (y). You observe large y, this explains the popularity, and your estimate of x regresses toward the average. It need not be the case that hotness and insightfulness actually correlate negatively, or at all, for this emergent negative correlation to appear when you control for popularity/availability.
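Here is a quick sketch of that emergent correlation; the variables and the top-5% cutoff are made up for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: insight (x) and hotness (y) are drawn independently,
# and attention is f = x * y, increasing in both (as in the example above).
x = rng.uniform(size=200_000)
y = rng.uniform(size=200_000)
attention = x * y

# In the whole population the two traits are uncorrelated by construction.
print("correlation overall:        ", np.corrcoef(x, y)[0, 1])   # ~0

# But you only ever see the posts that got enough attention to reach you.
seen = attention > np.quantile(attention, 0.95)
print("correlation among top posts:", np.corrcoef(x[seen], y[seen])[0, 1])  # negative
```

Conditioning on high attention is the "you only notice popular posts" filter; within that slice the two independent traits trade off against each other, which shows up as a negative correlation even though none exists in the underlying population.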
All of those are strong possibilities that I think a lot of AI doomerists underestimate, and are the main reason why I think AI explosion to infinity is not the most likely scenario.
But I definitely believe that less strongly than I did 10 years ago. AI has improved a lot since then, which suggests that things are pretty scalable, at least so far.
> We've been trying to make ourselves smarter for a long time
What? We have basically no forms of self-modification available whatsoever. You can study and reason, I guess, which is vaguely like adding training data to an AI. You can try eugenics, but that's highly controversial, incredibly slow, and has never been tried at scale for long enough; Hitler tried, and people stopped him before he could get very far. Gene-editing technology is very new and barely used, due to controversy, to not being good enough yet, and to taking decades to get any sort of feedback.
We have NOT been "trying to make ourselves smarter" in the same way or any way comparable to an AI writing code for a new AI with the express purpose of making it smarter. What we have been doing is trying to make AI smarter with more powerful computers and better algorithms and training and it has worked. The AI of this year is way smarter than the AI of last year, because coders got better at what they're doing and made progress that made it smarter. If you have more and better coders you get smarter AI. We can't do that to humans... yet. Maybe some day we will. But we don't have the technology to genetically engineer smarter humans in a similar way, so I don't know what sort of comparison you're trying to make here.
I'm not sure how you could be confident of that, because the entire point of "fast takeoff" is the nonlinearity. It's not saying "AI is going to improve steadily at a fast pace and we're going to get 10% of the way closer to lightcone speed each year for 10 years." It's "At some point, AI will be slightly smarter than humans at coding. Once it is, it can write its own code and make itself smarter. Once it can do that, growth suddenly becomes exponential, because the smarter AI can make itself smarter faster. And then who knows what happens after that?"
I'm not 100% convinced that this is how things play out, but "AI is gradually but slowly getting better at coding" is weak evidence towards that possibility, not against it.
Why would you think this? Every year it gets better at this sort of thing. Clearly it is beyond the level of current AI, but I don't see how you make the leap to "fundamentally beyond" when this seems like exactly the sort of thing you could do by explicitly layering various theories and techniques together. Maybe you have 20 different sub-AIs, each of which is an expert in one theory or technique, and then you amalgamate them into one mega-AI that can use all of those techniques (with some central core that synthesizes all of the ideas together). I don't know that that's definitely possible, but I can't see any evidence that it's "fundamentally" beyond AI just because they can't do it now. A couple of years ago AI couldn't figure out prepositions, like putting a cat on top of a horse vs. putting a tattoo of a cat on a horse, and people said that was "fundamentally beyond AI" because they've never encountered the real world and don't understand how things interact, but now they can usually do that. Because they got better.
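For what it's worth, here is a purely hypothetical sketch of that "committee of specialist sub-AIs plus a central synthesizer" structure. None of these classes correspond to a real library or system; they just make the proposed data flow concrete:

```python
from dataclasses import dataclass

@dataclass
class Specialist:
    """Hypothetical sub-AI that is an expert in one theory or technique."""
    theory: str

    def critique(self, draft: str) -> str:
        # In a real system this would call a model fine-tuned on one theory/technique.
        return f"[{self.theory}] notes on: {draft[:40]}"

class Synthesizer:
    """Hypothetical central core that merges the specialists' critiques into one revision."""

    def __init__(self, specialists: list[Specialist]):
        self.specialists = specialists

    def revise(self, draft: str) -> str:
        notes = [s.critique(draft) for s in self.specialists]
        # A real synthesizer would be another model conditioned on the draft plus
        # the notes; here we just concatenate to show where each piece plugs in.
        return draft + "\n" + "\n".join(notes)

committee = Synthesizer([Specialist("narrative structure"),
                         Specialist("prosody"),
                         Specialist("characterization")])
print(committee.revise("Once upon a time..."))
```

Whether an amalgamation like this would actually work is an open question; the point is only that "layering theories and techniques" describes an architecture you can at least write down, not something obviously impossible.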