This... varies pretty heavily by area and focus. The Furry Diffusion discord has some anti-spamming measures and a general ethos focused on quality, and as a result it's able to keep the 'floor' pretty high, and higher-upvoted images are generally pretty high-quality too. They're not all good, and even the greats aren't perfect, but the degree of intentionality that can be brought forward is far greater than most people expect.
That depends both on moderation that may or may not scale against a genuinely infinite slop machine and on relatively low stakes (and, frankly, monomania), but it at least points to ways AI creators can operate outside of full spam mode.
Yeah, it's a hard topic, and a scary one. I was considering linking this post from tumblr:
recently my friend's comics professor told her that it's acceptable to use gen AI for script-writing but not for art, since a machine can't generate meaningful artistic work. meanwhile, my sister's screenwriting professor said that they can use gen AI for concept art and visualization, but that it won't be able to generate a script that's any good. and at my job, it seems like each department says that AI can be useful in every field except the one that they know best.
It's only ever the jobs we're unfamiliar with that we assume can be replaced with automation. The more attuned we are with certain processes, crafts, and occupations, the more we realize that gen AI will never be able to provide a suitable replacement. The case for its existence relies on our ignorance of the work and skill required to do everything we don't.
And in some ways, it's a funny and illustrative story, and if AI freezes at exactly this state, I'd expect that we'll see a bunch of people very proud of their predictive prowess. And then it's also a funny and illustrative story, because 'can compete with you for every skill but your one or two specific areas of focus' describes the entire process of employing skilled labor everywhere.
It's me saying you're being a jerk, and Go Away.
Fine, done, have fun.
It's plausible, but TEGAKI seems... questionably competent enough that it'd be a weird bank shot. Beyond that, a lot of recent image models (Flux, Qwen Image/Edit, Nano Banana) show increasingly strong evidence of (or outright state they are) being trained heavily on synthetic data.
Huh. Fair if true. I saw (and was familiar with) Six One Indie for a couple previous years of their showcases, but I stand to be corrected if you have more detail.
It'd be a convenient dodge, but even if I were willing to take it, it'd probably still be wrong. In particular, I was under the impression that no amount of money could flatten the quadratic explosion of attention with context length, and it turned out to be not only doable, but doable in forms that can run on a single computer in my house (indeed, at 30B parameters, a single computer in my house can go up to 1M tokens, off by 20x from my 50k token estimate). It's not free, and the software development came from money as much as from obsessives poking at theory, but it's not like ChatGPT solved it primarily or even predominantly by throwing GPU cycles at the thing.
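For a back-of-envelope sketch of why the quadratic wall looked insurmountable (numbers are rough and mine, not from any particular model; approaches like FlashAttention sidestep this by never materializing the full matrix):

```python
# Naive attention materializes an n-by-n score matrix per head, so memory
# (and compute) grow quadratically with context length n. fp16 = 2 bytes.
def attention_matrix_gib(n_tokens: int, bytes_per_score: int = 2) -> float:
    return n_tokens ** 2 * bytes_per_score / 2 ** 30

print(attention_matrix_gib(50_000))     # ~4.7 GiB: plausible on one GPU
print(attention_matrix_gib(1_000_000))  # ~1863 GiB: why 1M tokens looked impossible
```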
Award-Winning AIs
AlphaPolis, a Japanese light novel and manga publisher, announced that it has cancelled plans for the book publication and manga adaptation of the winner of its 18th AlphaPolis Fantasy Novel Awards’ Grand Prize and Reader’s Choice awards. The winning entry, Modest Skill “Tidying Up” is the Strongest! [... ed: subtitles removed], was discovered to be predominantly AI-generated, which goes against AlphaPolis’s updated contest guidelines.
To be fair, "best isekai light novel" is somewhere between 'overly narrow superlative' and 'damning with faint praise', and it's not clear exactly how predominantly AI-generated the writing is or what procedure the human involved used. My own experience has suggested that extant LLMs don't scale well to full short stories without constant direction every 600-1k words, but that is still a lot faster than writing outright, and there are plausible meta-prompt approaches that people have used with some success for coherence, if not necessarily for quality.
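For what it's worth, a minimal sketch of what that kind of directed loop can look like, assuming the OpenAI client and a placeholder model name (all illustrative; this is not the procedure the contest winner used):

```python
# Hypothetical "direction every ~800 words" workflow: each chunk is steered
# by an author note plus a rolling summary, rather than one giant prompt.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def next_chunk(summary_so_far: str, direction: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "You are drafting a short story."},
            {"role": "user", "content": (
                f"Story so far (summary): {summary_so_far}\n"
                f"Author direction for the next ~800 words: {direction}\n"
                "Write only the next passage."
            )},
        ],
    )
    return resp.choices[0].message.content
```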
Well, that's just the slop-optimizing machine winning in a slop competition.
Prior to today, I had never heard of up-and-coming neo-soul act Sienna Rose before, but based on social media today, it seems a lot of people had—she’s got three songs in the Spotify top 50 and boasts a rapidly rising listener count that’s already well into the millions. She is also, importantly, not real. That’s right, the so-called “anonymous” R&B phenom with no social media presence, digital footprint, or discernible personal traits is AI generated. Who would’ve thunk?
It's a slightly higher standard than isekai (or country music), Spotify is a much broader survey mechanism than Random Anime House, and it's a little easier for native English speakers to check. My tastes in music are... ~~bad~~ unusual, but the aigen seems... fine? Not amazing, by any means, and there are some artifacts, but neither does it seem certain that the billboard number is just bot activity.
Well, that's not the professional use!
Vincke shared that [Studio] Larian was openly embracing and using generative AI tools for its development processes on Divinity. Though he stated that no AI work would be in the game itself ("Everything is human actors; we're writing everything ourselves," Vincke told Bloomberg), Larian devs are, per his comments, using AI to insert placeholder text and generate concept art for the heavily anticipated RPG.
It's... hard to tell how much of this is an embarrassing truth specific to Studio Larian, or just the first time someone said it out loud (and Larian did later claim to roll back some of it). Clair Obscur had a prestigious award revoked after the game turned out to have a handful of AI-generated temporary assets left in a pre-release-patch build. ARC Raiders uses a text-to-speech voice-cloning tool for adaptive voice lines. But a studio known for its rich atmospheric character and setting art doing a thing is still a data point.
(and pointedly anti-AI artists have had to struggle with it, saying they'd draw the line here or there. We'll see if that lasts.)
And that seems like just the start?
It's easy to train a LoRA to insert your character or characters into parts of a scene, to draw a layout and consider how light would work, or to munge composition until it points characters the right way. StableDiffusion's initial release came with a bunch of oft-ignored helpers for classically extremely tedious problems like making a texture support seamless tiling (sketched below). Diffusion-based upscaling would be hard to detect even with access to raw ingest files. And, of course, DLSS is increasingly standard for AAA and even A-sized games, and it's gotten good enough that people are complaining that it's good. On the more experimental side, tools like TRELLIS and Hunyuan3D can now turn an image (or, more reasonably, a set of images) into a 3d model, and there's a small industry of specialized auto-rigging tools that could theoretically bring a set of images into a fully-featured video game character.
I don't know Blender well enough to judge the outputs (except to say TRELLIS tends to give really holey models). A domain expert like @FCfromSSC might be able to shed more light on this topic than I can.
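On the seamless-tiling point, a minimal sketch of the classic trick, assuming the diffusers library and an example checkpoint: swap the pipeline's convolutions to circular padding so the generated texture wraps at its edges.

```python
# Swap zero padding for circular padding so latents (and the decoded image)
# wrap around at the borders, yielding a seamlessly tileable texture.
import torch
from torch import nn
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example checkpoint
    torch_dtype=torch.float16,
).to("cuda")

def make_tileable(model: nn.Module) -> None:
    # Circular padding makes each conv treat the image as a torus.
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            module.padding_mode = "circular"

make_tileable(pipe.unet)
make_tileable(pipe.vae)

image = pipe("mossy stone wall texture, top-down, photorealistic").images[0]
image.save("tileable_moss.png")  # tiles edge-to-edge
```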
Well, that's not the expert use!
Also note that the python visualizer tool has been basically written by vibe-coding. I know more about analog filters -- and that's not saying much -- than I do about python. It started out as my typical "google and do the monkey-see-monkey-do" kind of programming, but then I cut out the middle-man -- me -- and just used Google Antigravity to do the audio sample visualizer.
That's a pretty standard commit message, these days, excepting the bit where anyone actually uses and potentially even pays for Antigravity. What's noteworthy is the sign-off:
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Assuming Torvalds hasn't been paid to advertise, that's a bit of a feather in the cap for AI codegen. The man is notoriously picky about code quality, even for small personal projects, and from a quick read-through (as an admitted python-anti-fan) that quality seems present here. That's a long way from being useful in a 'real' codebase, from augmenting his skills in an area he knows well, or from duplicating his skills without his presence, but if you asked me whether I'd prefer to be recognized by a Japanese light novel award, Spotify's Top 50, or Linus Torvalds, I know which one I'd take.
My guesses for how quickly this stuff will progress haven't done great, but anyone got an over/under on when a predominantly-AI, human-review-only commit makes it into the Linux kernel?
Well, that's just trivial stuff!
This page collects the various ways in which AI tools have contributed to the understanding of Erdős problems. Note that a single problem may appear multiple times in these lists.
I don't understand these questions. I don't understand the extent to which I don't understand these questions. I'm guessing that some of the publicity is overstated, but I may not be able to evaluate even that. By their own assessment, the advocates of AI-solving Erdős problems admit:
Erdős problems vary widely in difficulty (by several orders of magnitude), with a core of very interesting, but extremely difficult problems at one end of the spectrum, and a "long tail" of under-explored problems at the other, many of which are "low hanging fruit" that are very suitable for being attacked by current AI tools. Unfortunately, it is hard to tell in advance which category a given problem falls into, short of an expert literature review.
So it may not even matter. There are a number of red circles, representing failures, and even some green circles of 'success' come with the caveat that the problem was already-solved or even already-solved in a suspiciously similar manner.
Still a lot ~~smarter about~~ better at it than I am.
Okay, that's the culture. Where's the war?
TEGAKI is a small Japanese art upload site, recently opened to (and then immediately overwhelmed by) widespread applause. Its main offerings are pretty clear:
Illustration SNS with Complete Ban on Generative AI
・Hand-drawn only (Generative AI completely prohibited, CG works are OK)
・Timelapse-based authentication system to prove it's "genuinely hand-drawn"
・Detailed statistics function for each post (referral sources and more planned for implementation)
That's a reasonable and useful service, and if they can manage to pull it off at scale - admittedly a difficult task they don't seem to be solving very well, given the current 'maintenance' has a completion estimate of gfl - I could see it taking off. Even if it doesn't, it describes probably the only plausible (if still imperfect) approach to distinguishing AI and human artwork, as AI models are increasingly breaking through the limits that gave them their obvious 'tells', and workflows like ControlNet or long inpainting work have made once-unimaginably-complex compositions readily achievable.
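To be concrete about what 'timelapse-based authentication' might even mean mechanically, a purely hypothetical sketch (TEGAKI hasn't published its method; the library choices and file names here are mine): check that the final frame of the submitted recording perceptually matches the uploaded image.

```python
# Hypothetical verification step, NOT TEGAKI's actual system: compare a
# perceptual hash of the timelapse's last frame against the uploaded image.
import cv2        # pip install opencv-python
import imagehash  # pip install ImageHash
from PIL import Image

def final_frame(video_path: str) -> Image.Image:
    cap = cv2.VideoCapture(video_path)
    cap.set(cv2.CAP_PROP_POS_FRAMES, cap.get(cv2.CAP_PROP_FRAME_COUNT) - 1)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise ValueError("could not read final frame")
    return Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))

def plausibly_matches(timelapse: str, upload: str, cutoff: int = 8) -> bool:
    # phash distances tolerate re-encoding; a small distance means the
    # upload is plausibly the drawing the timelapse ends on.
    distance = imagehash.phash(final_frame(timelapse)) - imagehash.phash(Image.open(upload))
    return distance <= cutoff
```

Even a check like this only proves the upload matches the recording's end, not that the recording wasn't produced by replaying AI output stroke by stroke; hence 'still imperfect'.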
That's not the punchline. This is the punchline:
【Regarding AI Use in Development】
To state the conclusion upfront: we are using coding AI for development, maintenance, and operational support.
・Integrated Development Environment: Cursor Editor
・Coding: Claude Code
・Code Review: CodeRabbit
We are using these services. We have no plans to discontinue their use.
@Porean asked "To which tribe shall the gift of AI fall?" and that was an interesting question a whole (/checks notes/) three years ago. Today, the answer is a bit of a 'mu': the different tribes might rally around flags of "AI" and "anti-AI", but that's not actually going to tell you whether they're using it, nevermind if those uses are beneficial.
In September 2014, XKCD proposed that an algorithm to identify whether a picture contains a bird would take a team of researchers five years. YOLO made that available on a single desktop by 2018, in the sense that I personally could and did implement training from scratch. A decade after XKCD 1425, you can buy equipment running (heavily stripped-down) equivalents or alternative approaches off the shelf, default-on; your cell phone probably does it on someone's server unless you turn cloud functionality off, and might do so even then. People who loathe image diffusers love auto-caption assistance that's based around CLIP. Google's default search tool puts an LLM output at the top, and while it was rightfully derided for nearly a year as terrible, llama-level output, it's actually gotten good enough in recent months that I've started to see anti-AI people use it.
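The bird check really is a few lines now; a minimal sketch assuming the ultralytics package and its pretrained checkpoint:

```python
# XKCD 1425's "five years of research," circa now. "bird" is a COCO class,
# so an off-the-shelf detector handles it with no training at all.
from ultralytics import YOLO  # pip install ultralytics

model = YOLO("yolov8n.pt")  # small pretrained checkpoint, auto-downloaded
result = model("photo.jpg")[0]
labels = {model.names[int(c)] for c in result.boxes.cls}
print("bird!" if "bird" in labels else "no bird")
```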
This post used AI translation, because that's default-on for Twitter. I haven't thrown it to ChatGPT or Grok to check whether it's readable or has a coherent theme. Dunno whether doing so would match my intended theme better, or worse.
That's fair. There are some models that allow more specific prompt-only control of multicharacter composition, like Whisk, Nano Banana, and Qwen, but they have tradeoffs and tend to give 'worse' output quality if used as the only or final part of a workflow. In-painting can give phenomenal amounts of control for very complex character layouts (or background layouts), but at the cost of a lot of tedious work (cw: 9mb video file). There have been similar efforts using related technologies for comics, loresheets, game environments, and ultra-complex characters (in the furry fandom, usually things like cyborgs and complex hybrids).
Which does give more space for self-expression, but it's not going to have the volume to be visible in a DeviantArt firehose view.
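For reference, a minimal sketch of the in-painting step described above, assuming diffusers and example file names: mask the region where the extra character should go, and redraw only that region.

```python
# In-paint one region of an existing scene: white pixels in the mask get
# regenerated to match the prompt; the rest of the image is preserved.
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",  # example checkpoint
    torch_dtype=torch.float16,
).to("cuda")

base = Image.open("scene.png").convert("RGB")                # example inputs
mask = Image.open("second_character_mask.png").convert("L")  # white = redraw

result = pipe(
    prompt="a second character leaning into frame, matching lighting and style",
    image=base,
    mask_image=mask,
).images[0]
result.save("scene_with_second_character.png")
```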