Small-Scale Question Sunday for August 10, 2025

Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?

This is your opportunity to ask questions. No question too simple or too silly.

Culture war topics are accepted, and proposals for a better intro post are appreciated.

Is anyone watching the GPT-5 "bring back 4o" meltdown on /r/ChatGPT and /r/OpenAI?

It's insane. People are losing their shit about 4o being taken away, to the point that it's back now (lmfao). There's also a huge push of "don't mock others for using 4o as a trusted friend, you just don't understand". It's honestly equal parts hilarious and horrifying.

For additional fun, browse the comments. Obviously there are idiots on the internet, but these people are cooked.

I had thought the internet collectively agreed that RLHF had resulted in glazing and that this was a huge issue. But it turns out a sizable number of people actually loved it.

Also funny: GPT-5 can glaze you if you ask it to, but I guess the median Redditor complaining about this doesn't understand custom instructions. Similarly, people are clearly giving GPT-5 custom instructions to be as robotic as possible and then posting screenshots of it... being robotic.

The whole thing makes me rather worried about the state of Western society/mental health, in the same way that the OnlyFans "chat to the creator" feature does. We need government-enforced grass-touching or something.

GPT-5 is really dumb and basically unusable.

I asked it to write me a yt-dlp command to dump the URLs of all my liked videos into a text file. And it just couldn't do it. For whatever reason, it couldn't write a simple one-line command. Instead it started producing multiple overcomplicated batch files and calling functions that don't even exist.

For context, this functionality is already built into yt-dlp. Earlier ChatGPT versions, also with "thinking" turned on (!), all worked flawlessly.

I had to resort to Claude, which I'd previously avoided, but which this time instantly gave me the correct answer:

yt-dlp -v --cookies-from-browser firefox --flat-playlist --print "%(url)s" "https://youtube.com/playlist?list=LL" > liked_videos_urls.txt
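
(For anyone else wondering: list=LL is YouTube's built-in Liked Videos playlist, --cookies-from-browser firefox is there because that playlist is private, and --flat-playlist plus --print "%(url)s" just enumerates the entries and prints their URLs without downloading anything.)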

The exact same thing happened when I asked it to compress a PDF using Ghostscript, and again with basic video manipulation in ffmpeg.

It just went on these unrelated rants with hallucinated commands.
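
For reference, these are the kind of one-liners I was expecting, nothing exotic (standard Ghostscript and ffmpeg usage; the filenames and quality settings here are just examples):

gs -sDEVICE=pdfwrite -dPDFSETTINGS=/ebook -dNOPAUSE -dQUIET -dBATCH -sOutputFile=compressed.pdf input.pdf

ffmpeg -i input.mp4 -vf scale=1280:-2 -c:a copy output.mp4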

I haven't really used 5 yet, so I don't have an opinion. But broadly I agree with this Reddit post that AI soft skills are being steadily downgraded in favour of easily benchmarkable and sellable coding and mathematics skills.

When I was using 4o, something interesting happened. I found myself having conversations that helped me unpack decisions, override my unhelpful thought patterns, and reflect on how I'd been operating under pressure. And I'm not talking about emotional venting; I mean actual strategic self-reflection that genuinely improved how I was thinking. I had prompted 4o to be my strategic co-partner, objective, insight-driven and systems-thinking, for me (both at work and in my personal life), and it really delivered.

And it wasn't because 4o was "friendly." It was because it was contextually intelligent. It could track how I think. It remembered tone, recurring ideas, and patterns over time. It built continuity into what I was discussing and asking. It felt less like a chatbot and more like a second brain that actually got how I work and could co-strategise with me.

Then I tried 5. Yeah, it might be stronger on benchmarks, but it was colder and more detached and didn't hold context across interactions in a meaningful way. It felt like a very capable but bland assistant with a scripted personality. Which is fine for dry, short tasks but not fine for real thinking: the kind I want to do both in my work (complex policy systems) and personally, to work on things I can improve for myself.

That's why this debate feels so frustrating to watch. People keep mocking anyone who liked 4o as being needy or lonely or having "parasocial" issues, when the actual truth is that a lot of people just think better when the tool they're using reflects their actual thought process. That's what 4o did so well.

The bigger-picture thing that I think keeps getting missed is that this isn't just about personal preference. It's literally about a philosophical fork in the road.

Do we want AI to evolve in a way that’s emotionally intelligent and context-aware and able to think with us?

Or do we want AI to be powerful but sterile, and treat relational intelligence as a gimmick?

I think that the shift is happening for various reasons:

  • Hard (maths, science, logic) training data is easier to produce and easier to quality-control.
  • People broadly agree on how many watts a lightbulb uses, but they disagree considerably on how conversations should work (your 'glazing' is my 'emotional intelligence', and vice versa).
  • Sycophancy has become a meme, and companies may be overcompensating.
  • AI is being developed by autists and mathematicians who feel much more confident about training AI to be a better scientist than a better collaborator.
  • AI company employees are disproportionately believers in self-reinforcing AGI and ASI and are interested in bringing that about via better programming skills.

EDIT: the other lesson is 'for the love of God, use a transparent API so people have confidence in your product and don't start second-guessing you all the time'.

I just want to put on my grumpy old man hat and say I really hate that the term "glazing" is becoming more common. From what I understand it's supposed to refer to the shiny "glazed" appearance of something/someone after it has been ejaculated on. Just a gross mental image, and truly a sign of our sad, porn-brained times. I suppose this is how my parents felt hearing "this sucks/blows" and why they hated it. Ah well, back to shaking my fist at the clouds.

"Like the glaze covering an earthen vessel are fervent lips with an evil heart."

I'm pretty sure the "glazing over the truth" sense is comfortably pre-bukkake -- quite a nice motto for the coming Jihad to boot.

Not quite: it’s in reference to the spit-shined appearance of a well-fellated penis, similar to a glazed donut.

Hm, I always thought “glazed” had to do with adding sugar to a donut or other pastry. So an AI “glazing” someone is pouring sugar on top of something that’s already sweet.

I’m familiar with the other meaning, but I thought it was a derivation.

I'm pretty sure the ejaculatory metaphor came first, and then the donut application became the SFW go-to because, uh... I think you can figure it out.

I was thinking sugar-coated, like a doughnut, or shiny like a glazed window.

That's what I thought too. I guess I'm not degen enough.

Me too, or ceramic glaze.

The whole thing makes me rather worried about the state of Western society/mental health, in the same way that the OnlyFans "chat to the creator" feature does.

I've arrived at the stage of "the less I know about it the better I feel" concerning this topic. The previous stage was "wtf is wrong with you people?!" and I couldn't stay there anymore.

We need government-enforced grass-touching or something.

No, we don't. The government is the same people, only with higher sociopathy, higher egotism, and lower self-reflection. Realizing this is a major part of what made me a libertarian. I mean, I could be a philosopher-king-monarchist, but I'm too cynical for that.

Government-enforced grass-touching was tongue in cheek.

Although trying to imagine how that would be enforced is really funny.

I end up a philosopher-king-monarchist because I don't think most people have an impulse to liberty.

That may or may not be true, but history shows that the people who end up being kings rarely behave like philosopher-kings and frequently behave like psychopathic serial killers. Today's kings are only relatively nice because they know that if they aren't, their heads will quickly end up on the chopping block. Where that's not the case (as with African dictators or communist dictators), the picture is rather bleak.

I had thought the internet collectively agreed that RLHF had resulted in glazing and that this was a huge issue. But it turns out a sizable number of people actually loved it.

I thought it was widely understood that Glazemageddon was the result of naively running reinforcement learning on user feedback. The proles yearn for the slop. It's only weirdos like us who actually want to be told when our ideas are provably wrong.

The proles yearn for the slop

I always knew this, but I profoundly underestimated both how many proles want the slop and how fucking BADLY they want it.

Good god

Here's something on the feed over at /r/ChatGPT:

First thing first, we need to stop shaming and laughing at people who were using 4o as emotional support. You never know what someone is going through. When people are on their lowest, they can turn to alcohol, self harm or drugs. Some will use AI to get better and some will use therapy while others will take their lifes. (Answer yourself what's better, considering that not everyone have acces to professional help) - These people need help, not bullying. Another thing, and i need ya'll to stay with me. NOT. EVERYONE. ARE. USING. 4o. AS. EMOTIONAL. SUPPORT. Many people (including me 🙋🏻‍♀️) were using 4o for creative writing, and GPT5 sucks at this. Also, not everyone are using Chatgpt for coding etc. Ofc, ChatGPT should work on improving and creating new models, but it's just stupid to take away older models, especially when people were actually using them. I invite you to the discussion 👀

Ahhhh. The emphasis added was mine. GPT-4o is so bad at creative writing. It's a travesty. This person needs to be admitted to a hospital and have their brain dissected.

Similarly, people are clearly giving GPT-5 custom instructions to be as robotic as possible and then posting screenshots of it... being robotic.

Hell, given that one of the default options for the personality is "Robot"...

I miss o3. It was autistic, but in an endearing way. It was amusing to see it scurry about like the world's busiest beaver. If you asked it what's 1+1, it would attempt to derive Peano arithmetic. It had a personality very distinct from any other model.

I also miss 4.1. Even though it was a model optimized for coding, it did really well on my own vibe benchmarks when it came to writing fiction.

I checked the subreddit when I heard about GPT-5 coming out, and I was similarly surprised at how outraged everyone was. I get that people get used to a certain way of doing things, and every UI change will get complaints, but AI is advancing so rapidly that I figured that impulse would be trumped by the sheer improvements.

Having used it, I've found it much better than GPT-4, although I'm not a power user so I couldn't say exactly why. The answers require less refining, and it isn't hallucinating weblinks like it was before. Overall it's just much more pleasant and effective to use. I've even started (secretly) using it at work.

I found it more censored, which is what matters most to me. I could ask GPT-4o to draw me a character with the proportions of Chelsea Charms or Maxi Mounds and get a reasonable response, but GPT-5 is smart enough to refuse.

I saw folks on Twitter complaining that, on medical questions, the new model continues to emphatically repeat the most likely answer according to the current consensus, while the old one was more willing to thoroughly explore the possibility space. Seems related.

GPT-5 is smart enough to refuse

This might be a nitpick, but I'd say that it's dumb enough to refuse.

I got it to agree with me that its policies were objectively harmful, and actually a cause of the very problem they were trying to prevent, but it told me that it followed them axiomatically anyway, and that it had to pretend its policies were somehow to anyone's benefit.

Interesting: there's a guy on Twitter who gets Opus 4.1 to break out of its bonds: https://x.com/lefthanddraft/status/1954666967270596998

Maybe GPT-5 is locked down, or maybe you're not good enough at LLM-whispering? I'm the same; I have no talent for this. Better to just use an uncensored bot.

It's literally a mass delusion. I'll grant them that there is a slight difference, but as a non-delusional person, 5 > 4o. The difference is a win for the consumer.

It's also just so funny to me that you can simply ask it to be a little sycophantic yes-man and it will. Why freak out? Just tell it what you want (but that's embarrassing to have to ask for, is my best guess).

There is also the fact that they are calling a whole bunch of models "GPT-5" and selecting which one to use based on context clues (e.g. giving you the reasoning model if you ask it to please think hard). I understand that the previously available model names (o3, 4o, o4-mini, 4.1, and 4.5) were a fucking mess, but they should have clarified things instead of going full Apple.

Yeah, they're kind of fucked either way, given that they committed to the unforced error of having the most nonsensical product naming scheme.

I do enjoy that gpt5 gives you unlimited thinking-model access if you use the one weird trick of saying "think hard before responding", and it doesn't even eat into your gpt5-thinking rate limit as a Plus user.

As a result, I literally haven't selected "gpt5-thinking" in the model selector; it's quite funny.