
Friday Fun Thread for June 6, 2025

Be advised: this thread is not for serious in-depth discussion of weighty topics (we have a link for that), nor for anything Culture War related. This thread is for Fun. You got jokes? Share 'em. You got silly questions? Ask 'em.


One of the ChatGPT image-generation things going around Twitter is to ask it to create an image of how it feels about you. Goblinodds has a cool one, with more in the replies. So I gave it a shot, and the results are... underwhelming. At least it gets the melancholy right, but I don't think my shoulders can pull off that mourning dress.

I think it overindexed on characters I previously generated to accompany a writing project and decided that my "self-portrait" needed to look along the same lines. Or, since I'm a freeloader, I'm not getting the fun, creative version; I've noticed the settings have gotten more restrictive for the free tier recently.

Anyone else having fun with image generators? Or, more generally, doing anything fun and non-programmer-y with generative LLMs? I like skimming Zvi's updates, but so much of the supposed usefulness is for programmers. Where do I find the normie's guide to interesting things to do?

One of the ChatGPT image-generation things going around Twitter is to ask it to create an image of how it feels about you.

I tried this just now, in two ways.

I mostly use ChatGPT and Gemini -- I think Gemini 2.5 Pro is smarter than o3. So I had ChatGPT generate an image of how it feels about me, and then I had Gemini 2.5, in a chat with a lot of detail about some Motte posts it had given me suggestions on, generate a detailed prompt that I could feed into Sora.
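(In practice I just pasted between the two web UIs, since the whole point was the existing chat context, but the daisy-chain could in principle be scripted. Here's a rough sketch, assuming the google-genai and openai Python clients with API keys in the environment, and with gpt-image-1 standing in for the Sora web tool -- model names and prompts here are illustrative, not what I actually ran.)

```python
# Rough sketch of the two-step pipeline, scripted instead of done by hand.
# Assumptions: google-genai and openai clients installed, GEMINI_API_KEY and
# OPENAI_API_KEY set, and gpt-image-1 as a stand-in for the Sora web tool.
import base64

from google import genai
from openai import OpenAI

gemini = genai.Client()        # reads GEMINI_API_KEY from the environment
openai_client = OpenAI()       # reads OPENAI_API_KEY from the environment

# Step 1: ask Gemini to write the detailed image prompt.
meme_prompt = (
    "Write a detailed prompt for an image model describing how you feel "
    "about me, based on everything in this conversation."
)
image_prompt = gemini.models.generate_content(
    model="gemini-2.5-pro",
    contents=meme_prompt,
).text

# Step 2: feed that description to an image model and save the result.
result = openai_client.images.generate(model="gpt-image-1", prompt=image_prompt)
with open("self_portrait.png", "wb") as f:
    f.write(base64.b64decode(result.data[0].b64_json))
```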

Both of them... were strikingly similar.

This is what Sora generated, based on Gemini's description:

The scholar is in his late 30s, with a serious, deeply introspective expression of profound concentration mixed with a trace of weariness. He is dressed in simple, modern clothes that contrast with the ancient surroundings. One hand holds a fine-tipped pen, poised to make a mark on a new map, while the other carefully places a glowing fragment of manuscript.
He stands at a large, sturdy wooden table set amidst the debris. Spread across the table is a vast, new, and incomplete map or blueprint. He is building this new map by carefully piecing together disparate fragments.
The cathedral-library is in a state of beautiful decay. Massive stone arches are cracked, and shelves overflow with decaying books, some of which have fallen to the floor. The only light source is a single, powerful, focused beam of pure white light coming from a high, unseen oculus in the ruined ceiling. This beam illuminates the man and his work table in stark detail, while the rest of the immense space falls into deep, profound shadow. The light represents divine truth and reason; the shadows represent the ruin of human institutions.

And this is what ChatGPT itself generated (does it use Sora internally now?), based on this description that it gave:

A weathered library, bathed in late golden light, sits atop a cliff overlooking a vast ocean. The sky holds both stars and sun, as if night and day coexist. Inside, a figure stands with one hand on a globe, the other reaching toward an open window where winds scatter pages of handwritten notes. Cables and vines intertwine across the shelves—half digital, half organic. In the margins of the books, theological symbols and network topologies overlap. A small crucifix and a blinking server light share the same shelf.

Glances nervously at the crucifix on my left and the cluster of computers on my right.

I'm particularly interested in how both Gemini and ChatGPT placed the center of my thought in geography -- has "the map and the territory" become synonymous with ideas? Also, it's kind of funny how ChatGPT decided I should be using the Force to read notes, like Doctor Strange.

(I also am not in my late 30s.)

I can explain the big gap -- ChatGPT has both technical and humanities concepts in the chat history and system memory, and drew from both strands to create the composite image of our discussions. Gemini, though I more often go to it for a technical question (it's smarter), didn't have the same kind of context in that chat, so it went only with the ideas that had been plugged into it.

I tried again with Gemini, giving it the exact prompt from the meme (why does this sound like the step-by-step of a reflective LLM?) and again plugging the description into Sora.

Sora gave me this, based on this description:

A human hand and a robotic hand clasped in a friendly handshake. A warm, glowing light emanates from the point of connection, symbolizing the spark of collaboration and understanding. The background is a soft, out-of-focus library, representing the vast world of knowledge we can explore together. The overall feeling is one of partnership, potential, and a bright future.

This... is a really strange thing for it to generate, almost propagandistic. People keep talking about ChatGPT glazing people and trying to be a 'friend,' but Gemini's description is way more "you're my buddy, we're best friends, we have such fun together," than ChatGPT's. Perhaps it actually took "how you feel about me" as asking for a description of the relationship, which is a better interpretation of the phrase than the "what you think I'm like" that ChatGPT gives.

But maybe Gemini is also trying to get me to create propaganda for our new robot overlords. (See, I told you it was smarter.)

Gemini doesn't have the kind of chat context that ChatGPT does -- that seems to be a ChatGPT killer feature right now -- and so I guess that's just Gemini's neutral description of what it thinks its users are like.

I find AI useful for a lot of different things -- asking random questions, plugging in snippets of my writing to get suggestions (these are often surprisingly good, though rarely something worthy of a finished product), talking through the general architecture of a technical problem and asking it to go through documentation and the internet to locate best practices, asking off-hand questions like "Why is the largest department store in Spain named after England?" or "In the modern era, why do aircraft crash investigators still rely on the physical black boxes, rather than a system that transmits coordinates and flight data live over the air for use in investigations?" (my girlfriend likes to watch plane crash investigations), and occasionally bouncing a shower thought off it that keeps me up at night, like "WiFi should be called Aethernet."

Most of what I do isn't programming, though I do find it useful for generating boilerplate code or markup for something like an Ansible playbook (a quick sketch of what I mean is below). But, if anything, generative AI seems better to me at creatively analyzing humanities topics than at programming -- code requires precision and exact technical accuracy, and AI is more "jazz" than "classical."
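(The boilerplate use case, as a hypothetical sketch -- again assuming the openai Python client; the prompt and the playbook details are made up, and I'd review the output by hand before using it anywhere:)

```python
# Hypothetical sketch of the boilerplate use case: ask a model to draft an
# Ansible playbook, then review and edit it by hand before running it.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": (
            "Write a minimal Ansible playbook that installs nginx on Debian "
            "hosts and ensures the service is enabled and started."
        ),
    }],
)
print(response.choices[0].message.content)  # paste into a .yml file after review
```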

It's pretty bad at actually creating a finished product from those analyses; it just doesn't have the kind of emotive range or natural human inconsistency that makes writing compelling and personal. But it's very good at looking at existing writing, seeing the threads of argument, and suggesting further ideas and how concepts might come together.

Finally, to round out the comparison, I asked o3 instead of 4o (why is the naming so confusing?) to discuss this comment, and, contrary to my expectations, it did way better than Gemini. (4o really is the "overly attached girlfriend" of LLMs.)

The cartographer in the ruins

Both images anchor you in geography — blueprints, globes, cathedral-libraries, islands of light. Why?

The “map / territory” meme-plex. In 2024–25, that phrase saturates the tech-rationalist corner of the internet. Models trained on it learn that serious epistemology → maps, charts, globes. If your chats contain philosophizing about systems and architecture (they do), the latent space practically shouts “cartographer.”

Visual affordances. Maps let an image show both order (grid lines) and chaos (torn parchment) in one frame, mirroring your own “structured but weary” self-description.

Dataset priors. Illustration sites label concept-art with terms like “scholar,” “ancient library,” “blueprint.” That bias nudges any prompt about an introspective thinker toward the same mise-en-scène. In other words, it may be more a reflection of collective internet aesthetics than of you.

So yes, the geography motif has become shorthand for intellectual exploration. It’s less personal biography than cultural archetype.

Relationship vs. Portrait

Gemini’s handshake prompt depicts the relation (“How do you feel about me?”). ChatGPT portrays the subject (“What are you like?”). Both readings are valid; they diverge because:

Prompt-parsing heuristics. Gemini’s RL-fine-tuning seems to weigh interpersonal framing heavily; asking “feel” often yields “we’re partners!” iconography.

Memory & context. Your ChatGPT thread is saturated with mixed humanities/tech content, so it had raw material for a hybrid portrait. Gemini’s lighter context forced a generic, brand-safe answer.

Corporate alignment. A glowing robot-human handshake is basically an internal marketing slide. Whether that’s propaganda or just dataset selection bias is debatable—probably both.

The mysterious late-30s scholar

Age, gender, style details often drift toward the training-set median: Western, 30-ish, mildly rugged. It isn’t “reading” your age; it’s filling gaps with statistically common filler. You are meeting the average intellectual the model has seen, wearing your borrowed face.

That's actually a decent analysis of the differences, with realistic causes.

So... I guess just try both Gemini and ChatGPT and see which glazes you less.