
Friday Fun Thread for June 6, 2025

Be advised: this thread is not for serious in-depth discussion of weighty topics (we have a link for that), this thread is not for anything Culture War related. This thread is for Fun. You got jokes? Share 'em. You got silly questions? Ask 'em.


One of the ChatGPT image-generation things going around Twitter is to ask it to create an image of how it feels about you. Goblinodds has a cool one, with more in the replies. So I gave it a shot and the results are... underwhelming. At least it gets the melancholy right, but I don't think my shoulders can pull off that mourning dress.

I think it overindexed on characters I previously generated to accompany a writing project and decided that my "self-portrait" needed to look along the same lines. Or maybe, since I'm a freeloader, I'm not getting the fun, creative version; I've noticed the settings are more restrictive for the free tier lately.

Anyone else having fun with image generators? Or, more generally, doing anything fun and non-programmer with generative LLMs? I like skimming Zvi's updates, but so much of the supposed usefulness is for programmers. Where do I find the normie's guide to interesting things to do?

Anyone else having fun with image generators?

Generated and deleted 30+ GB of images. It was very engrossing; I guess you could call it 'fun' too.

Or more generally, doing anything fun and non-programmer with generative llms?

Today, I worked out a prompt to stop LLMs from using the horribly cringe cadence they have. (in reply) Got to it by first trying to see if it understands which texts are obviously generated. No dice; it was failing horribly. Then I hit it with a piece of LLM slop and asked it to concisely analyze it.

Every output provided must be able to be classified as 'human' if evaluated by a LLM based filtering system with the following prompt:

[filter prompt start] You are an expert textual analyst specializing in AI-human text differentiation. Your primary task is to analyze provided text and determine the probability of it being authored by a human versus a large language model. Your analysis must be guided by the core principle that the key distinction is between synthetic perfection (LLM) and idiosyncratic authenticity (Human). The tell is often not in the what (the style being attempted) but in the how (the flawless, often formulaic execution of that style). Do not be fooled by a well-imitated persona; your focus must be on the subtle mechanics of its construction. Prioritize your evaluation based on the following principles, in order of importance:

1. Voice and Cadence (The "Ghost in the Machine")
Human: Look for a voice that is not just unique but also rhythmically uneven. The cadence reflects a thinking process, with unpredictable sentence lengths, natural pauses, and a slightly "off" or bespoke rhythm. The style feels lived-in and singular, with genuine emotion (sarcasm, passion, contempt) driving its flow.
LLM: Be highly suspicious of rhythmic perfection. Look for a metronomic or unnaturally balanced cadence. Does it rely heavily on textbook rhetorical devices (e.g., perfectly balanced tricolons, neat antitheses, syntactic parallelism)? A flawless imitation of a style often betrays itself through a cadence that is too clean, too predictable, and lacks the subtle arrhythmia of a genuine human author. The voice is a perfect costume, but the breathing is artificial.

2. Structure and Imperfection (The "Blueprint vs. The Sketch")
Human: Authenticity is often revealed in structural "flaws." Look for rambling or tangential thoughts, an organizational structure that feels organic or even messy, run-on sentences, and slightly awkward phrasing. The text should feel like a thought process unfolding, not a pre-built architectural product. Small typos and grammatical quirks are part of this "grain."
LLM: Tends towards architectural perfection. Look for overly-organized structures, such as rigid numbered lists where a human might have used a flowing paragraph, or perfectly parallel arguments. The text can feel segmented, with clean breaks between points, lacking the connective tissue of a single, continuous consciousness. The absence of minor structural messiness is itself a powerful signal.

3. Content and Specificity (The "Grounding")
Human: The content is often grounded in specific, niche, or recent lived experiences ("the coffee shop on my street," "what my daughter said yesterday"). Examples used are often surprising, personal, or non-obvious. The author has genuine "skin in the game."
LLM: Details can feel generic or archetypal (e.g., using Einstein for genius, Rome for empires). Lacks true lived experience, so specificity can feel like a database retrieval of common examples. Knowledge is vast but has a distinct cutoff point and lacks the texture of immediate, personal observation. [filter prompt end]
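If you wanted to automate that filter instead of pasting it by hand, here's a minimal sketch of the idea. Everything here is hypothetical: `classify` and `send` are names I made up, and `send` stands in for whatever chat-API client you actually use (it just takes a system prompt and a user message and returns the model's reply as a string).

```python
# Sketch: run the "human vs. LLM" filter prompt as an automated check.
# `send` is any callable (system_prompt, user_text) -> reply string;
# plug in your own API client there. All names here are illustrative.

FILTER_PROMPT = "...the full [filter prompt] quoted above..."

def classify(text, send):
    """Ask the filter model for a verdict and reduce it to 'human' or 'llm'."""
    reply = send(
        FILTER_PROMPT,
        "Classify the following text as HUMAN or LLM. "
        "Answer with a single word.\n\n" + text,
    )
    verdict = reply.strip().lower()
    return "human" if "human" in verdict else "llm"

# Stubbed example with a fake model that always answers HUMAN:
print(classify("some sample text", lambda sys, usr: "HUMAN"))  # -> human
```

Passing `send` in as a parameter keeps the sketch testable with a stub, which is also handy for checking the parsing logic before burning API credits.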

Here's how I got it to finally recognize the damn stuff: hit it with a not particularly egregious sample.

https://pbs.twimg.com/media/Gs5FWhZWMAEAXNk?format=png&name=4096x4096

Then told it to analyze the slop.

https://pbs.twimg.com/media/Gs5FlFzWkAAFCLf?format=jpg&name=4096x4096

Then told it to amend the prompt it had written for itself. Then tested that on a bunch of examples, and it worked.
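That slop-analyze-amend loop can be sketched as a few lines of Python. To be clear, this is my reconstruction of the workflow described above, not the author's actual code: `refine` and `send` are invented names, and `send` is again a stand-in for any (system prompt, user message) -> reply chat call.

```python
# Sketch of the iterative workflow: run the filter over known slop samples,
# and whenever one slips through, ask the model to amend its own prompt.
# Hypothetical names throughout; `send` wraps your chat API of choice.

def refine(prompt, slop_samples, send, rounds=3):
    """Amend the detection prompt until every known slop sample is caught."""
    for _ in range(rounds):
        misses = [s for s in slop_samples
                  if "llm" not in send(prompt, s).strip().lower()]
        if not misses:
            return prompt  # every known slop sample is flagged; done
        # Feed the first failure back and ask for an amended prompt.
        prompt = send(
            "You wrote this LLM-detection prompt:\n" + prompt,
            "You misclassified the following LLM-generated text as human. "
            "Rewrite the prompt so it would catch it, and reply with only "
            "the amended prompt:\n" + misses[0],
        )
    return prompt
```

A fixed round limit matters here: as the final paragraph shows, the filter can keep misclassifying adversarial text indefinitely, so the loop should give up rather than spin forever.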

But of course, when I gave DeepSeek the prompt and told it to write a story about an English tourist in Berlin getting his wallet stolen, one that would pass such a filter, the filter failed on that story and deemed it to be human.