Most art was already commodified, and it was commodity artists, not creative artists, who got the most brutal axe.
Essentially, contrary to your point about AI having imagination, creativity is the primary skill it lacks. It's basically a machine for producing median outcomes based on its training data, which is about as far away from creativity as you can get.
But for most artists, their jobs were based on providing quotidian, derivative artworks for enterprises that were soulless to begin with. To the extent that creativity was involved in their finished products, it was at a higher level than their own input, i.e. a director or something commissioning preset quotidian assets as a component in their own 'vision', the vision being the creative part of the whole deal.
However, I do believe creative artists will be threatened too. It's a little complicated to get into, but I think creative art depends not just on lone individuals or a consumer market, but on a social and cultural basis of popular enthusiasm and involvement in a given artform. I'm talking about dilettantes, critics, aficionados here. It's a social and cultural pursuit as much as it's an individual or commercial one, and I think that AI will contribute to the withering away of these sorts of underpinnings the same way corporate dominance and other ongoing trends previously have.
So for the artistic field, I envision complete and total commoditized slop produced by machines, once the human spirit has finally been crushed.
Who are these people, exactly?
Internet nerds like us who based their lives around forums, intellectualism, in my case, literature, etc. The new AI world of dopamine cattle harnessed by the tech fiends suggests total obsolescence of any sort of life that isn't fully grounded in the concrete or else enslaved for the purpose of dopamine-slop control. Admittedly, some people here have lives which go beyond the abstract.
I had a somewhat related idea to this. It relates to ways that middle-class professionals could be screwed. I haven't really hammered it out fully, but here's the gist of it. Basically, the value of automating labor is that it allows human resources to be freed up for other tasks. Rather than having one hundred artisans hand-tooling goods, you have one machine operated by one engineer producing the same goods, and then ninety-nine people who can perform tasks in other areas of the economy.
But with AI, there will be an extinction of an entire class of meaningful work: that which is done by the middle class. There aren't adjacent fields for them to move into once displaced, as those will also be taken by AI. Their only options will be to move up or down, into different classes of the economy, and for the vast, vast majority of them, it will be a downwards spiral.
The area below the middle class economy is called the gig economy. So the value of AI is that there will be a wealth of gig workers, and thus fast food can be delivered more cheaply than ever before.
That is the one benefit of AI we are certain about.
There is a hypothetical scenario, a longstanding dream of science fiction, where with infinite labor afforded by AI there will be infinite opulence. However, some points that contest that are 1) there is only so much demand for consumables and market goods and services, so that economic demand begins to be overshadowed by status concerns and non-economic spheres of life in terms of desired things, 2) many of the inputs that go into supplying those goods and services are finite (i.e. resources) and so their creation can't be infinite, 3) political ramifications suggest reduced power and thus leverage for the displaced, and so their economic needs could easily be ignored by those who retain power.
All in all, there looks to be dark times ahead.
An AI that can accurately identify and dismiss slop is 90% of the way towards producing quality content, since you could just build the generative AI with that skill built in (and train it on that skill).
Not if the process itself is beyond the AI to recreate.
For instance, say that a great movie like A Clockwork Orange was made in part through the theoretical understandings the main actors had developed over their lifetimes for their crafts and used to feed into their decisions of how to act and portray their characters.
Coming up with a similar quality of acting might be impossible through mere observation and mimicry of what works and what doesn't. The AI has an intuition for what sorts of things generally go together, but it doesn't use, among other things, underlying theoretical know-how to construct its outputs.
My current assessment is that there's a low ceiling for how far AI 'thinking' can take the quality of its output, particularly regarding the complexity of what it's attempting to do. Projects that require a layered approach of various theories and techniques seem like they're fundamentally beyond AI. The more systems that need to come together to create a work, the more exponentially difficult it becomes for a pattern-finder to match its quality. The pattern-finder needs to become capable of wielding tools, systems, theories in its thinking in order to up its game past a certain point.
I've heard people say before, in the context of AI art, that humans are essentially just 'pattern finders', too, and so are creatively indistinguishable from AI. But I think this is wrong: it ignores external tools humans use to structure and create their work, such as theories and techniques, which cumulatively take the load off of them having to conceive everything in a fit of genius. I think this is the primary reason AI, despite its 'brilliance' as a search engine or generalist explainer, is so lacking in certain other regards. It's due to the total reliance of its 'cognition' on what, compared to humans, would be more like a single sub-process.
It does claim to be that which it is displacing, though. There generally isn't open acknowledgement that AI goods are made by AI, and many sellers attempt to actively claim otherwise.
In either case, I'd say there's currently an implicit assumption by many buyers that, when they're purchasing a book, say, they're buying something that an intelligent mind constructed using skill and artifice (with plot twists, character arcs, and so forth), and not something that reads beautifully on the first page but never builds up to anything or has anything to say. AI's utility in this regard is its ability to both impersonate more meaningfully crafted human products and to exploit the sort of assumptions that book customers have built up through former habits.
The result will be the death of those 'former habits', as book customers do not gain the same pleasant experiences from their current purchasing habits, insofar as they inadvertently purchase AI products, and so the market will shrink and utility will be destroyed. If AI products were merely inferior, they could simply be ignored and filtered by such customers. It is their ability to mimic which makes them destructive. They can inhabit certain aspects of outer forms but not provide the same deeper experiences.
Maybe the case is less true for AI drawings, which are more of a what-you-see-is-what-you-get affair, in which case, no, it wouldn't count as a counterfeit to my mind. Unless the buyer was hopeful for some sort of engagement with an actual human that they weren't actually getting, or if they thought they were buying a more complex work that could be intensively studied to extract meanings which weren't immediately obvious, only to eventually realize it's an AI gestalt of several other works which only mimics their qualities superficially, say.
Which seems like a probable enough outcome.
AI is killing off the lower tier skill tree of this band of creators, and I'm not sure it's a bad thing.
It's going to kill off the livelihoods and acclaim for any artist who operates on any site susceptible to AI spam (which includes mainstream ones like Amazon), as the spam will make it impossible for new artists to attract notice and will even make it difficult for established artists to attract new fans. The AI spam doesn't work by creating equal or superior products, it works by simply existing in vast quantities and being hard to distinguish at a casual glance from legitimate products. It's a form of counterfeit goods when used this way.
Furthermore, it kills off a sort of broader cultural enthusiasm for art which exists and accounts for much of our society's interest in it. Fed by AI content mills and their inferior simplistic content, you might be left with satisfied degenerates who don't care about complexity or meaning and are wholly content with endless repetitive images of their favorite anime characters or whatever, but you won't have the kind of cultural underpinnings that sustain fandoms, forum media discussions, critical appreciation, or anything else that makes art socially engaging.
This will in turn kill off the production of any sort of non-hyper-commodified art. Who wants to put effort into things if no one's A) going to notice or buy it, or B) even possess the cultural capability of caring?
Not being an art expert, I can’t judge those images too deeply. One thing that stands out to me though is how compositionally simple those examples are. They seem to all consist of one character in the foreground and then some kind of dramatic stylistic background. My own experience with AI image generation is that it’s very difficult to get the prompt engine to orchestrate more than just one or two characters, so this sort of simple approach seems like it is probably the best that current AI is capable of. To me, it doesn’t seem like a rich tool for self-expression.
I’m not a ‘math wizard’, but something about this seems off. Shakespeare didn’t write one hundred plays and then choose the best few dozen to publish. He developed his playwriting ability until he was at a skill level to consistently put out good content. If AI lacks the underlying skills to create good works, then should we expect even one in a trillion of its works to be as good as Macbeth, or should we regard the creation of such a thing as physically impossible unless underlying conditions are met? It seems like it’s less a matter of statistical probability than physical possibility.
Not true. Human works that find great success usually do so based on their merits as artistic products. AI works that find success usually do so as flukes. Put out millions of AI created light novels and occasionally one of them will slip through some quality filtering service. Their success is predicated on the inability of these services to filter quality 100%, and they enjoy an advantage over the shittiest of human works in this regard based only on the scale of their output.
AI artistic successes are indicative of survivorship bias. The way their creators operate is by spamming vast amounts of works and seeing what sticks. Through quirks of fate, a few of them end up successful. This business model is probably short lived, though, as the very spam it relies on degenerates the platforms necessary for their proliferation, so that user interest will eventually decline. Already we’re seeing sites like Deviant Art and Literotica killed off by AI spam. AI will kill off markets rather than improve them.
I was thinking of adding some caveats more or less for what you’ve mentioned: games that aspire to create extremely vast worlds for the sole purpose of escapism, games that focus more on modularity and repetition than complex storytelling, games that have some sort of unique visionary use of AI in mind.
And I could see content created by writers who make use of AI being OK as well. The artist can compensate for the AI's weaknesses while using it selectively as a specialized, rather than all-purpose, tool, as AI does have some strengths, mainly as a search tool. It’s hugely deficient if you ask it to be creative though, and I don’t believe it’s doing much more than merely copy-pasting content from its database.
In any case, I resisted this impulse on the basis that most of this stuff seems either degenerate or suspect.
Funnily enough, I was thinking exactly of the Elder Scrolls as a series that in some cases has produced slop content of a hugely superior calibre to AI. I speak of Morrowind here. There is no way AI by itself would ever come up with exceptional banalities like three naked Nord barbarians who’ve been identically robbed by witches or dirt farmers giving you the exact same encyclopaedic digressions on regional geography anytime you ask. AI can give you banalities all right, but not of any variety that suggests an underlying meaning or humour to it. These aren’t examples of genius, but they are examples of the kind of colour and charm that humans will give you even in the service of creating slop, and in both cases, they work not just as bland content, but as stuff that enhances the actual quality of the game. AI can’t resist reverting to the generic, so it would entirely come down to human creators to invent these sorts of bizarre outputs. In fact, in many cases I think it’s the twists and turns of the creative processes themselves that humans have to go through that leads them down these unlikely avenues. Case in point, the absurdity in these examples was a direct product of what humans had to do to try and overcome their material limitations.
Disagree on all counts. When AI does content, it’s ultra generic, has no sense of tone or effect, and lacks any of the idiosyncratic spontaneity of even sloppily put together human content. There’s also a lack of broader complexity, meaning that any ‘character development’ it’s adding to a given story isn’t corresponding to a grander vision of what that story aspires to be or is about, but is essentially just the cut-and-pasted clichés, tropes, and emblems of other works that only incidentally contain some of their meaning. I’d prefer content to be cut rather than given over to AI.
Male testosterone is necessary for warmaking capacity, entrepreneurial culture, and keeping totalitarianism in check. It brings many positive advantages that more than outweigh its negative ones. The problem with high crime, low IQ populations is that they do not provide very many benefits, but bring great costs.
Or the Zanj Rebellion Arabs. I hope I live to see the petro Islam oligarchs overthrown by their own greed.
Saltburn, Oppenheimer, Cyberpunk: Edgerunners. I don’t know if any of these will become iconic, but then iconic has come to mean milkable corporate crap in recent decades, so we might be through with the age of iconic media. We might be through with the very concept in the era of IPs.
I don’t see how the presence of non-Jews in Israel can be anything other than a transient state. Jewish nationalism is so strongly encoded in its institutions, culture, and constitution that there will always be an impending threat to its minorities of some sort of fascistic upwelling towards the expulsion of minorities and purification of the state, even if presently this nascent urge (being fundamental to non-permeable forms of nationalism) is held in abeyance.
There is an infinite demand for labour in the forms of landscapers and butlers and such for the moneyed. That’s essentially what has come to pass with the rise of the service economy. Of course, for the past 30 years, wages for people below the 90th percentile have been stagnant.
I disagree, I think you’ve got the relationship backwards. People who are already addicted to media demand the meaningless bulk content. They reel at more substantial works. Substantial works would require them to invest themselves in a more enriching way in what they consume. Offering them infinite high quality works wouldn’t get them to actually partake in said works, and this is shown by the fact that currently most people give little attention to those that are already on offer. For them to partake in quality works would presuppose them not being addicted to consumer slop.
Don’t rich people already have essentially infinite income? They do spend a lot of time frolicking on yachts and treating themselves to various extravagant delights, but for all that, their lives seem fuller than those forced to accept drudgery.
The original is at least readable. It suggests personality and context, the nature of which makes it funny. I can’t say that the ChatGPT examples in this thread have anything at all to make them worth reading. Like the best of what the bot produces, there’s sometimes this sense of airbrushed elegance, but never any sort of underlying texture.
When comparing AI drawing abilities to writing abilities, I think a key difference is that, for us as humans, drawing slop is harder than writing slop. What I mean by this is that an AI can generate something like a 4K image of aliens with energy swords rendered in a hyper-detailed yet ultra generic fashion, and the output is something that only 0.01% of humans have the technical skill to create. But when you ask an AI to write pages and pages of Shakespeare and the results are cliché-ridden gibberish, you can only look at it and say that, other than in terms of scale, you could’ve easily made this crap yourself, through some dada-style cut-and-paste technique if nothing else. Essentially, putting letters in sequence to form words is easier than drawing entire images, and so when AI generates images, even though they’re just reconstituted from data sets on the basis of predictive software and so involve no actual drawing, they still seem impressive.
Personally I don't find AIs as fun to talk to as any human. To me, they're like an interactive encyclopedia. It is fun to read and learn about stuff, but they can't stand in for the human element, either on the individual level or the level of an entire society or group (like the motte). Ultimately I find them in some sense desirable in terms of their first order effects (helping with research, etc.), but it's their second and third order effects I'm worried about, where I think, as I explain elsewhere, they will kill off large parts of human culture, remap the class system, and generally work towards all the ongoing, negative trends that already seem apparent. In a sense they are a continuation of capitalism and its logic.