
Small-Scale Question Sunday for May 4, 2025

Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?

This is your opportunity to ask questions. No question too simple or too silly.

Culture war topics are accepted, and proposals for a better intro post are appreciated.


Has everyone else been seeing this kind of cadence: short sentences and contrasting statements? I keep seeing this and thinking AI. How do others see it? Do you think it's suddenly become more prominent too?

The plan was smart:

• ISVs to move rifle squads quickly

• LRVs to give Cavalry squadrons mobility and sensors

• M10 Bookers to restore firepower to the dismounted fight. It wasn’t perfect, but it made the IBCT relevant again.

Now the Army has canceled the M10. The LRV is nowhere in sight. And what’s left? An “MBCT” concept with no protected firepower, no recon platform, and a few light vehicles. This isn’t transformation. It’s disarmament.

The M10 solved a real problem. So did the LRV. Killing the platforms without replacing the capability isn’t reform. It’s regression.

It screams AI to me. I've used most models enough for writing, both fiction and nonfiction, to know some of their hallmarks. The bit you've highlighted makes me groan a little every time I catch it in the wild.

It's not that there aren't real humans who write that way, but these days, my money is on an LLM.

I rarely use ChatGPT compared to Claude or Deepseek, so I can't recognize it that well. It felt a little Deepseek to me, but then Deepseek is a fair bit like ChatGPT, and one hardly expects US military commentators to use Deepseek. Deepseek gives me stuff like:

Silence.

Then—pandemonium.

Or:

The road to the Black Tower was long.

And her vengeance?

Beginning.

Slop!

I primarily use Gemini, but in my experience it's endemic to all LLMs (except maybe Claude, but I hardly use that these days). It's not as glaring as em-dashes, but I still notice.

Praying Altman releases that fine-tuned model designed for creative writing someday. The demand is clearly there.

Related to this, I've noticed that various webnovel authors seem to be using the same LLM to describe high-society activities and products that they likely have little to no personal experience with.

Well, it's either that or they're doing a remarkable job of copying each other's style, but only in this particular area...

Way back in the GPT 3.5 days, I used ChatGPT to translate a character's dialogue from my plain English into Jamaican patois.

I was a bit embarrassed when, about a year later, an actual Jamaican reader read my novel and left a comment exclaiming how surprised he was by the authenticity of the Jamaican slang. It was far too rare on the internet, and almost never done so well, he told me, and he asked whether I'd consulted a Jamaican speaker. I told him that I'd done my "research", which was half-true. Well, I guess it worked as intended.

Here it's more uncanny-valley stuff, imo. The correct words are used, but it comes off like a status-insecure under- or working-class person studying up before going to the best restaurant in town, trying to impress their boss/date/family with words they learned an hour ago. It's barely a step up from ordering "your most expensive wine".

This is, of course, fine when that's what's literally happening in the story, but it rarely if ever is.