
Friday Fun Thread for October 24, 2025

Be advised: this thread is not for serious in-depth discussion of weighty topics (we have a link for that), this thread is not for anything Culture War related. This thread is for Fun. You got jokes? Share 'em. You got silly questions? Ask 'em.


I assumed that the "private" free tiers of most commercial models don't actually let you keep the full thread in context, and that performance on summarizing a long and meandering thread would degrade if the model wasn't keeping the whole thing in context. I don't recall exactly, but I think the thread was around 25k tokens. It's possible the free online models have expanded to that context size by now; I haven't done a comprehensive survey. If not, the best case is that they reject the prompt because it's too large. The trickier case is that the model "forgets" important parts of the thread because they fall out of context.
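For anyone curious how you'd sanity-check this before pasting a thread in: here's a back-of-the-envelope sketch. The 4-characters-per-token ratio is a common rule of thumb for English prose (not an exact tokenizer count), and the function names and the 2048-token reply headroom are my own inventions, not any model's API.

```python
# Rough sketch: estimate whether a long thread fits in a model's
# context window before sending it. ~4 chars/token is a crude
# heuristic for English text, not a real tokenizer.

def estimate_tokens(text: str) -> int:
    """Crude token estimate: roughly 4 characters per token."""
    return len(text) // 4

def fits_in_context(text: str, context_limit: int, reserve: int = 2048) -> bool:
    """Leave `reserve` tokens of headroom for the model's reply."""
    return estimate_tokens(text) + reserve <= context_limit

# Stand-in for a long, meandering ~25k-token thread:
thread = "word " * 25_000

print(estimate_tokens(thread))             # ~31250 by this heuristic
print(fits_in_context(thread, 131_072))    # fits a 128k-token model
print(fits_in_context(thread, 32_768))     # too tight: rejection or silent truncation
```

If the thread doesn't fit, what happens next depends on the frontend: some reject the prompt outright, others silently truncate from the top, which is the "forgetting" failure mode.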

I didn't want to spend money (okay, I spent like $0.03 of electricity) or have real-name accounts associated with something quite so depraved as researching "Are traps gay?" I suppose I could make a burner somewhere, but that seemed like more work. If it were something important that I simultaneously really didn't want to pipe to an API endpoint I have no control over, I would have used a better model. I might have managed qwen3-30b-a3b-thinking-2507 or openai/gpt-oss-120b (probably good enough?), but that combination of needs is so rare that I'm too cheap to buy good-enough hardware to run that kind of thing in a decent time span. I also suspected that, excepting Grok, the main commercial models might be too nerfed to tell you whether traps are gay. In retrospect I should have chosen a spicier local model for maximum lulz.

So yes, the paid commercial models would likely produce a better summary than I did. I'm not 100% confident they won't associate even private chat queries with my real identity when they decide who to mark for liquefaction, though.

I can confirm without a shadow of a doubt that Gemini 2.5 Pro (free on AI Studio) allows for a full million tokens of CW. That can fit a CWR thread, maybe every single one this year. The limiting factor is usually my phone refusing to copy and paste the entire text of a very long thread, which I can avoid by either linking it or downloading the page as a PDF.

I've had no issues either with ChatGPT, which I pay for, or with Claude and Kimi, which I mooch off.

I'd recommend Gemini on AI Studio. It's literally free, and unless you have excellent opsec, Google probably knows your precise bowel movement timings already. It also saves chat history to private Google Drive storage, which means you can delete the verbatim conversations yourself (unless they've already been trained on, which is a much lossier process).