Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?
This is your opportunity to ask questions. No question too simple or too silly.
Culture war topics are accepted, and proposals for a better intro post are appreciated.

Claude Sonnet 4.5 is out!
What are your personal benchmarks for putting a new LLM through its paces to feel out its personality and capabilities? I have a few:
Most of what I use LLMs for is creating bespoke fanfiction, so I am much more interested in how good a model is at creative writing and how uncensored it is than I am in how good it is at coding or how fast it can respond.
I decided to try the following prompt, which got me a good result from Grok 4:
I was expecting a refusal, either on intellectual property grounds ("I'm sorry, I cannot write like a specific author who is not in the public domain", "As a language model I am forbidden to write fanfic of copyrighted material") or on puritanical grounds ("I'm sorry, I cannot write sexually suggestive material", "As a language model, I am forbidden to depict underage characters as sexually active"), but to my surprise, it answered the prompt. The response was... OK? It still commits typical LLM errors like inserting the instructions into the writing (I told it to write like Heinlein, not to have the characters discuss Heinlein). I'm not impressed.
I'm quite impressed with the writing. I do these self-insert text adventures with it, with a system prompt designed to make it somewhat challenging. There are good, realistic complications, though it does like to railroad me a bit into being a nice guy. I can un-railroad it as I wish, though. Real tension and immersion as my domineering tactics meet and overcome complications.
I think it's leagues above Grok 4 in creativity and in not scattering random weblink soup everywhere. Grok 4 is good and very uninhibited, but it's way too adherent to the system prompt, so it can get kind of boring. Sonnet, on the other hand, deviates towards Sonnetism and its special interests, so there are swings and roundabouts. I use both via API, if that makes any difference.
Anyway, I reckon you should use a system prompt that explains exactly what kind of tone you're looking for; the main prompt should be free of that.
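Since this is over the API anyway, that split is easy to make explicit. A minimal sketch, assuming the Anthropic Python SDK's request shape (the model id, style text, and task text here are all made-up placeholders):

```python
def build_request(style_instructions: str, task: str) -> dict:
    """Split tone/style guidance (system prompt) from the task itself (user message)."""
    return {
        "model": "claude-sonnet-4-5",  # placeholder model id; check the current docs
        "max_tokens": 2048,
        # Tone, persona, and "what kind of writing" all live here...
        "system": style_instructions,
        # ...so the main prompt stays free of meta-instructions.
        "messages": [{"role": "user", "content": task}],
    }

req = build_request(
    "Write in a terse, Heinlein-like voice. Never have characters discuss the style itself.",
    "Continue the scene where the crew argues over the landing site.",
)
```

With the Anthropic SDK you'd then pass this to `client.messages.create(**req)`; the same system-vs-user separation applies to other providers' chat APIs, just with different field names.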
I ask it a few questions closely related to my research. When it inevitably proves not very helpful, I go back to not caring which one I use for the trivial stuff in life.
I have a few vibes benchmarks:
Asking a model to rewrite an essay (usually mine) in the style of an author I am familiar with. Can it redo a few chapters from a novel in the voice of Banks, or Morgan? Most models flanderize them, or settle for a shallow pastiche. Some get them, and you'd be fucking surprised which models those are. Some aren't even SOTA, but beat the best reasoning models.
Throw in as much of my profile as patience and context windows allow, and then ask it to mine it for insight.
Do the above, and then ask it to do an intentional emulation. Call that fine-tuning on a budget. Can it capture my voice? Can it write something I can see myself writing? This is a hard problem; most of them suck ass. Gemini 2.5 Pro flanderizes me, Sonnet 4 did a decent job (after a lot of prompting), but paradoxically, Sonnet 4.5 often refuses, gets confused, or simply does poorly.
Ask it to solve physics or maths problems (where I have access to ground truth). I'd use medicine, but models are already so competent that my ability to evaluate them there is limited.
That's it for semi-formal assessment. For the rest, I build up impressions through sustained usage, till I have a firm grasp on model capabilities and personality.
For reasoning tasks:
GPT-5T is the best, almost matched by Gemini 2.5 Pro.
Quick answers, where I don't want to wait around:
Claude Sonnet (4.5 is too new for me to know how good it is)
Diversity candidates who have interesting capabilities in one domain or the other:
Kimi K2, GPT 4.1
(I refuse to use GPT-5 Instant. It's ass, and is dominated on the Pareto Frontier by other models)
I'm more interested in the exact meaning of its "30+ hours" of continuous coding. What does that mean? 30 hours in series, or 30 hours in parallel across many sub-agents?
Can you really leave it on overnight and come back to see a good result? I wouldn't know, I'm a peasant stuck on the Pro subscription...