
Small-Scale Question Sunday for August 10, 2025

Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?

This is your opportunity to ask questions. No question too simple or too silly.

Culture war topics are accepted, and proposals for a better intro post are appreciated.

I haven't really used 5 yet, so I don't have an opinion on it. But broadly I agree with this Reddit post that AI soft skills are being steadily downgraded in favour of easily benchmarkable and sellable coding and mathematics skills.

When I was using 4o, something interesting happened. I found myself having conversations that helped me unpack decisions, override my unhelpful thought patterns, and reflect on how I’d been operating under pressure. And I’m not talking about emotional venting; it was actual strategic self-reflection that genuinely improved how I was thinking. I had prompted 4o to be my strategic co-partner: objective, insight-driven, and systems-thinking, for both my work and personal life, and it really delivered.

And it wasn’t because 4o was “friendly.” It was because it was contextually intelligent. It could track how I think. It remembered tone, recurring ideas, and patterns over time. It built continuity into what I was discussing and asking. It felt less like a chatbot and more like a second brain that actually got how I work and could co-strategise with me.

Then I tried 5. Yeah, it might be stronger on benchmarks, but it was colder, more detached, and didn’t hold context across interactions in any meaningful way. It felt like a very capable but bland assistant with a scripted personality. That’s fine for dry, short tasks, but not for real thinking: the kind I want to do both in my work (complex policy systems) and personally, to work on things I can improve about myself.

That’s why this debate feels so frustrating to watch. People keep mocking anyone who liked 4o as being needy or lonely or having “parasocial” issues, when the truth is that a lot of people just think better when the tool they’re using reflects their actual thought process. That’s what 4o did so well.

The bigger-picture thing that I think keeps getting missed is that this isn’t just about personal preference. It’s a philosophical fork in the road:

Do we want AI to evolve in a way that’s emotionally intelligent and context-aware and able to think with us?

Or do we want AI to be powerful but sterile, and treat relational intelligence as a gimmick?

I think that the shift is happening for various reasons:

  • Hard (maths, science, logic) training data is easier to produce and easier to quality-control.
  • People broadly agree on how many watts a lightbulb draws, but they disagree considerably about how conversations should work (your 'glazing' is my 'emotional intelligence', and vice versa).
  • Sycophancy has become a meme, and companies may be overcompensating.
  • AI is being developed by autists and mathematicians who feel much more confident about training AI to be a better scientist than a better collaborator.
  • AI company employees are disproportionately believers in self-reinforcing AGI and ASI, and are interested in bringing that about via better programming skills.

EDIT: the other lesson is 'for the love of God, use a transparent API so people have confidence in your product and don't start second-guessing you all the time'.
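
To show what I mean by transparency, here's a minimal sketch using the OpenAI Python SDK (the snapshot name and prompt are just illustrative examples, not a recommendation): you pin an exact dated model version in the request, and the response reports which model actually served it, so there's no silent rerouting to second-guess.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Pin an exact dated snapshot instead of a floating alias like "gpt-4o",
# so the model can't be quietly swapped out from under you.
resp = client.chat.completions.create(
    model="gpt-4o-2024-08-06",  # illustrative snapshot name
    messages=[{"role": "user", "content": "Help me unpack this decision."}],
)

# The response echoes back the model that actually handled the request.
print(resp.model)
print(resp.choices[0].message.content)
```

Contrast that with a consumer app that routes your chat to whatever backend it likes: the explicit API contract is exactly what lets users stop second-guessing what they're talking to.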