
Transnational Thursday for November 27, 2025

Transnational Thursday is a thread for people to discuss international news, foreign policy, or international relations history. Feel free as well to drop in with coverage of countries you’re interested in, talk about ongoing dynamics like the wars in Israel and Ukraine, or even just share whatever you’re reading.


Possibly irrelevant, but I was recently given a stack of essays in Japanese, some of them handwritten, that I was supposed to be able to understand. I asked ChatGPT 5.1 to translate one from an image (it was handwritten), which it did. Reading the translation, I was startled by what appeared to be an alarming misapprehension on the part of the person who wrote the essay. I asked ChatGPT about the word choices and where the writer was going, along with several follow-up questions, and the LLM agreed with my assessment that the person was grossly uninformed.

I then decided to look at the essay myself (which I should have done to begin with, but I had been lazy).

Nothing in the translation was in the essay.

When confronted, ChatGPT crumbled, apologizing and saying that "because the text was hard to read" it had simply pattern-matched the writing against similar writings it had been exposed to and extrapolated the entire remainder from that. In other words, a massive hallucination.

I haven't gotten anything this wrong from ChatGPT in a while, and it definitely gives me pause about requesting translations without scrupulously double-checking them.

The most annoying thing? It then asked me to reupload the image, promising to "really be careful and be sure only to translate accurately," and... it then produced what passed for a much more accurate translation. Why this didn't happen to begin with was a mystery, but asking it that just got the usual "You're right to be annoyed" groveling.

edited for typos