This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
With the increased usage of ChatGPT and other AI slop in everyday communication such as casual emails and Slack messages, AI apologists have increasingly tried to excuse this usage by non-native English speakers (citation needed, but beside the point). The excuse is that for non-native speakers, AI usage can save time, or even increase the quality of the resulting writing. I want to argue that the opposite is true, and that AI output is particularly and exceptionally corrosive when used by non-native English speakers.
I came across this section (plaintext transcription in a comment below) of a YT video, where an intermediate-level English learner is trying to use ChatGPT to improve a piece of writing, and also to learn from it (source video, not important). Here's the catch: ChatGPT's output is just plain bad.
Overall, my issues with ChatGPT for this use case can be broken down into three main problems:
Let's go over the main revisions point by point:
stunning -> absolutely mind-blowing - Stunning is already quite a strong adjective and ChatGPT is overdoing it. OK edit.
I commented -> I typed in the comments - Absolutely a bad edit. Wordier for no more meaning, and the original English is more true to the original Japanese.
Moreover -> Not only that - Moreover is perfect here. Bad edit.
Em dash - not called for here. AI tell.
reacted really disgusting me -> actually reacted - This seriously changes the meaning, taking away a major element of the storytelling. Bad edit.
I'm in a heaven right now -> I'm in heaven - "I'm in heaven right now" carries emphasis that the edit drops. Bad edit.
It was a peaceful and amazing moment in my life -> That one moment was pure peace and bliss. Probably one of the best highlights of my life. - Deemphasized and wordified into two sentences. A better version would easily be “It was the most peaceful and amazing moment in my life”. Bad edit.
And also, the most excited thing is -> And the most exciting part is still ahead. - AI slop tell. Bad edit.
I could die there -> nothing - ChatGPT just took that out completely!!!! WTF!!!!
I really wanna support her live too. -> I really, truly want to support her with everything I’ve got. - “really, truly” came out of nowhere and the double emphasis with “with everything I’ve got” is odd. Bad edit.
Imagine that live I feel like drinking her bath water. -> Just thinking about that live … feels like I could drink her bathwater. - This one is totally lost. Basic context clues and cultural knowledge make it clear that the narrator already wants to drink gamer girl bathwater regardless of any live. The correct edit would be "When I imagine that live, I feel like I'm drinking her bathwater" or "Imagining that live feels like drinking her bathwater." The original English is closer to correct than ChatGPT's, and the correct meaning can be inferred.
Of course ChatGPT can probably be made to produce better outputs with better prompting, or used differently, but this is just one of many examples where ChatGPT usage by a casual user has actually made things worse.
Now what's the point of this post? First, I would like to urge everyone not to use GenAI output in the final work, even for edits. Using AI as a judge is probably fine, but the best way to maintain quality is probably to write all of the final text in your own words, even for people without perfect English. Secondly, with all levels of society using or even abusing AI tools, productivity may go up by some metrics, but the result will also be an enshittification of all written communication.
We've seen an increasing number of complaints enter the discourse about foreign immigrants with weak English skills just being annoying to deal with in everyday life. I've had similar experiences, where dealing with a fresh-off-the-boat foreigner has been an annoyance when ordering food or asking a simple question - and where hiring an American would have cost only a tiny bit more. Well, now AI slop is going to provide a double whammy: lazy or misguided native speakers are going to enshittify their own communication with slop, and foreigners will have their English learning impeded, so the English they do write will be worse.
I (non-native English speaker) found ChatGPT's critique helpful with a recent application letter. I will grant you that it was a bit more formal than your choice of text, though -- I did not talk about drinking anyone's bathwater; time will tell if that was the correct choice or not.
Most of its suggestions were minor stylistic things (using a gerund instead of an infinitive in certain phrases, avoiding repetition of word constructs) which seemed to me to be improvements.
I will grant you that an application letter is probably a more central example of most of its training data than that perv diary entry -- it is a continuous text, for one thing. Also, unlike that diary entry, I did not start out with a (presumably well-formulated) draft in a foreign language which I translated to English and then asked GPT to correct my English without access to the original (which from what I can tell is what happened with the diary). Instead, I wrote my thoughts down in English, sometimes awkwardly, and relied on it to put them into a smoother form.
I guess I haven't seen the letter itself, but I think this is solidly covered by my point: By critiquing things that aren’t wrong, learners who follow blindly will lose their voice.
The final edited work may or may not be "objectively better" but it will certainly be more "chatgpt-ish."
I think that ChatGPT will suggest edits which will make a draft by a non-native speaker sound both smoother and more ChatGPTish, in pretty much the same way as an American/British editor will suggest edits that make the text sound more natural but also more American/British English.
Of course, if I were to prompt "please restate the content in the following paragraph in your own words", then I would get something which sounds a lot like ChatGPT.
While my active English is a bit limited, I think my passive English is rather good. I have read a lot of stuff written by native speakers and trust myself to generally pick between two proposed phrasings.
For what it is worth, ChatGPT did not try to introduce a single em dash for me. I took most of its stylistic suggestions (often along the lines of "use gerund here" or "you just used that phrase two sentences ago" or "oops, you forgot to finish this sentence") and rejected most suggestions around tone appropriateness.
I have observed that South Asians like this excuse a lot because their own notion of English fluency and "high-class" writing is very similar to ChatGPTese: too many words, spicy metaphors, abuse of idioms, witticisms, hyperbolic imagery, casual winking at the reader, lots of assorted verbal flourish, "it's not X – it's Y" and other… practices impress and fascinate them; ChatGPT provides a royal road to the top, to the Brahmin league, becoming like Chamath or Balaji. Maybe they played a role in RLHF.
In my view, all prose of this kind, whether organic or synthetic, is insufferable redditslop. But at least human South Asians are usually trying to express some opinion, and an LLM pass over it detracts from whatever object-level precision it had.
This is part of the general problem with taste, which is sadly even less equally distributed between branches of humanity than cognitive ability.
P.S. No, this is not a specific dig at self_made_human, I mainly mean people I see on X and Substack, it's incredibly obvious. I am also not claiming to be a better writer; pompous South Asian redditslop is apparently liked well enough by American native speakers, whereas I'm just an unknown Ruskie, regularly accused of obscurantism and overly long sentences. I do have faith in the superiority of my own taste, but it's a futile thing to debate.
lol, I made this association before getting halfway through your first paragraph
I would like to see someone do some kind of analysis of whether writing style is genetic. How you would adjust for the confounder of culture, I have no idea.
Adoption studies.
I am pretty sure temperament is largely genetic, but that shouldn't translate into such a conspicuous stylistic pattern as you get from cultural environment.
What a sterling example of making the dream of perfection the sworn enemy of the merely better. As others have pointed out before, the most likely alternative, in the absence of ChatGPT, would have been this poor fellow resorting to Google Translate or other, far simpler ML solutions. There isn't an abundance of fluent English and Japanese speakers willing to proofread random YouTube comments.
I don't speak Japanese, but I see nothing particularly objectionable in the translation. It might not capture all nuance, but it gets the gist of it across. Learning a language takes time, probably years, and by the time this gentleman gets good enough that he needs or appreciates the nuance, LLMs will be even better at the job.
This paper: https://arxiv.org/html/2504.18221v1 grades gpt-4 versus other translators with actual human grading (not bs like ROUGE which is useless) and finds that gpt-4 doesn't seriously outperform deepl, and google translate, while worse, isn't even that far off.
This test is actually also unfair in favor of ChatGPT, as since the test text is a story by a famous author, ChatGPT has likely already taken a peek at human translations of the work during training.
I'm reading the paper, but initial issues that caught my eye:
They're not evaluating GPT-4. They're using 4o. The exact implementation details of 4o are still opaque, it might be a distill of 4, but they're not the same model. As far as I can tell, that's a point of confusion on your part, not the authors.
4o, even at the time of publication, was not the best model available. Very far from it. It is a decent generalist model, but not SOTA. It isn't the best model, the second best model, or even the third... best model, on almost any metric one opts for.
I have, as far as I'm aware, never claimed that LLMs match or outperform professional human translators. My core contention was that even basic bitch LLMs are good enough, and an improvement over previous SOTA, including DeepL, which this paper supports.
This would hold true even if the authors had chosen to use something like o3, Opus 4, Gemini 2.5 Pro etc. It is unclear to me if they were aware that better options were available; there's little reason to use 4o if one wants to know what the best possible output is.
And even if it is true, it doesn't matter. The models are consistently getting better. We have a new SOTA every few months these days.
4o vs gpt4 is my mistake, but gpt4 is generally considered obsolete and nobody uses it. It's true that 4o is a mixed bag and underperforms gpt-4 in some aspects, but we have no reason to believe that it's significantly worse than gpt-4 at translation.
4o is also what powers chatgpt.com so it's the model that most casual users will get the output from.
4o was released well before Gemini 2.0 or Claude 3.5, so it likely was the best model at the time, along with the original GPT-4. I agree that right now 4o is not good. My core contention is that DeepL is good enough, as it's within spitting distance of ChatGPT. But on the other hand, ChatGPT has given people ways to do much, much worse when they use it wrong.
The paper seems to have been published on April 2025.
Gemini 2.0 Pro and 3.7 Sonnet came out in February 2025. Claude 3.5 Sonnet came out in June 2024 and was better than the version of 4o out then.
At the very least, the authors should have made a note that they weren't using the SOTA, or that the SOTA would have moved significantly by the time of publication. To do less is mild dishonesty. This isn't 2022, the pace of progress is evident.
True, but that's OAI being cheap, and not an indictment of the utility of LLMs for translation. It's akin to claiming TVs suck, and then only using a cheap and cheerful $300 model from Walmart as the standard.
My criticisms stand, namely that LLMs only get better, they're "good enough", and that this is a net improvement over the status quo. It remains to be seen how much better the SOTA is over 4o or DeepL.
Oh oops, I misread your comment; I thought you said that 4o was not SOTA when it was released. Yes, it was obsolete when the paper came out.
Won't change the fact that people who use them wrong will still do worse than not using an LLM at all.
I replied to your OP a few weeks ago expressing skepticism that using ChatGPT was actually improving your writing. This comment reinforces my skepticism. Yes, the ChatGPT output has fewer "errors" but it does a worse job of conveying the message than the user's own error-ridden text. Even from a purely stylistic standpoint, the ChatGPT output is worse. One of the hallmarks of bad English prose is using extra words to say nothing, and ChatGPT is guilty of this in virtually every sentence. It's not the perfect being the enemy of the good. The ChatGPT output is not good.
This is a very bad way to showcase ChatGPT's writing chops. When I use it, by virtue of personally having native fluency, I am far better equipped to critique its output. The original user, a Japanese person somewhat conversational in the language, is simply not in a position to tell where it could have made better stylistic choices. Note that my objection above is to do with the fact that the availability of ChatGPT represents a massive improvement over the previous status quo. If he had simply asked it to translate his original Japanese, it would have done a perfectly fine job.
LLMs are not a monolithic entity. When most people talk about ChatGPT, they're ignorant of the fact that there are options beyond the slop-tier 4o. You may or may not know the difference, but the average normie complaining doesn't. My own workflow involves using multiple different models, multiple attempts, prompts that I know qualitatively improve results and my own ability to discern between outputs.
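For concreteness, here is a minimal sketch of what that kind of multi-model pass might look like. The openai client and the model names below are placeholder assumptions for illustration, not a record of my actual setup:

```python
# Minimal sketch of a "multiple models, multiple attempts" translation pass.
# Assumptions: the openai Python package pointed at an OpenAI-compatible
# endpoint; swap the model names for whatever you actually have access to.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SOURCE_TEXT = "paste the Japanese comment here"  # placeholder source text

PROMPT = (
    "Translate the following Japanese YouTube comment into English. "
    "Preserve the writer's voice - slang, excitement, and all - rather than "
    "smoothing it into formal prose.\n\n"
)

candidate_models = ["gpt-4o", "gpt-4.1"]  # illustrative shortlist

translations = {}
for model in candidate_models:
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT + SOURCE_TEXT}],
    )
    translations[model] = response.choices[0].message.content

# The selection step stays human: read the candidates side by side.
for model, text in translations.items():
    print(f"--- {model} ---\n{text}\n")
```

The script only saves copy-pasting between tabs; discernment between outputs is still the part that requires fluency.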
For example, when I ask for a translation that preserves the writer's rather unique character:
Kimi K2:
Gemini 2.5 Pro:
I don't know basically any Japanese, and I'm pretty far from streaming culture -- but based on my knowledge of how these things work I'm pretty sure both are failing badly in their interpretation of 'support on live stream'; I'm pretty sure he means that he plans to, like, give her money? Probably not all of his money ("support her with everything I've got"), but certainly "cheering my lungs out" would be atypical behaviour on a livestream, no?
He probably means cheering. The Japanese message uses the loanword "live", which refers to special events like concerts, not regular live streams. You support concerts by showing up and cheering or clapping, and she's describing her birthday stream as a "horror live", so he's probably intending to watch the whole stream and spam emoji in chat whenever something noteworthy happens. This is pretty common behavior for concerts on YouTube.
Does he? Wouldn't surprise me, but I think we need ~~weebs~~ subject matter experts to disambiguate on our behalf.
I mentioned it in another comment, but the typical workflow of intermediate speakers is to write the work directly in the target language, rather than translating it. Machine translation with LLMs is certainly pretty good right now, but using it doesn't help anyone learn the language.
We certainly might be heading towards a world where everyone uses machine translation by default and nobody bothers learning a new language, but I'm certainly a luddite in that lane.
I don't agree. Better translation tools like deepl have been around for a while, and arxiv papers haven't shown that gpt series models seriously dominate dedicated translation models. But on the other hand ChatGPT is giving everyone a huge gun that people can shoot themselves with because it does things besides translate.
I would even argue that by virtue of using ChatGPT wrong, the user ended up with a worse result than just using a shitty translation tool like Google Translate.
IDK if you specifically disagree, but I strongly prefer the original English, errors and all, over the ChatGPT output.
I work a lot with non-native English speakers and I think ChatGPT has definitely improved things, especially when they want to explain more complex aspects of the business or their funding needs, or answer certain questions. I used to get a lot of emails so poorly written and barely comprehensible that it would take much longer to parse the meaning than if they had just sent the email in whatever native language and let me Google-translate it. Mostly these are intelligent people; they're just too proud to hire a translator (but not to use an LLM).
To me this is kind of a mass production of furniture or fast fashion thing. At the high end, the amount of genuinely enjoyable and well-produced writing will decline, not even because LLMs aren’t capable of it but because they will default to simple, emotive English in the new style, and because even good writers won’t be bothered in most cases to write themselves or to tweak prompts for better output.
But for the 99% of people who either don’t speak the language they’re ’writing in’ natively or don’t have good verbal ability, communication can be much easier, the gap between what is in their head and something someone else can read has, in my opinion, shrunk.
I would say the advantage of ChatGPT over a traditional translator is that you can interrogate it. For example, say you get an email from your boss you do not understand. You can ask it not only for a translation but also about subtext or tone, even to rephrase the translation in a way that preserves meaning. It seems to me that if you take advantage of this even 20% of the time, you come out ahead, because despite obvious model weaknesses and potential errors, direct translation has its own misunderstandings too (which seem worse).
Ditto for the composition side of things. You can do stuff like compose a foreign language email and then have it back-translate it to you as a way of double checking you said what you intended to say. Sure, AI might worsen the writing,
Alas, most humans lack this kind of imagination, but optimistically we can teach people how to get more out of their LLM usage.
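To make the back-translation check mentioned above concrete, here is a rough sketch. The openai package, the model name, and the example Japanese draft are all stand-ins made up for illustration:

```python
# Rough sketch of the "back-translate to double-check yourself" idea.
from openai import OpenAI

client = OpenAI()


def ask(prompt: str) -> str:
    """Single one-shot chat completion; no conversation state is kept."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


# 1. The email you composed yourself in the foreign language (made-up example).
my_draft = "金曜日の打ち合わせですが、都合が悪くなりました。月曜日に変更できますか。"

# 2. Ask for an independent back-translation, plus a read on tone/subtext.
back_translation = ask(
    "Translate the following Japanese email into plain English, and briefly "
    "note anything about its tone that the writer might not intend:\n\n" + my_draft
)

# 3. Compare the back-translation with what you meant to say before sending.
print(back_translation)
```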
All that said, the original post as I understood it was more about using LLMs as a language-learning tool, and I think there they have a potential point. The biggest counterpoint also comes from interactivity: ever tried using the advanced voice mode? It's pretty neat, and allows verbal practice in a safe, no-judgement, infinite-time environment, which is quite literally the biggest obstacle to language learning that 95% of people face! So if the AI sometimes misleads in correcting a passage, I think it's a worthwhile tradeoff for the extra practice time, considering how frequently language learners basically stop learning, or give up, at a certain point.
ChatGPT hacked their brains and convinced them that using machine translation is OK. Because before, their ego was too big to copy-paste the output out of Google Translate, but somehow when it's ChatGPT it's totally OK.
Of course by giving up on writing English in the first place, they will also never learn.
Interestingly @RandomRanger cited a video in another thread that's an unintentional example of this. It's an Avatar compilation video titled "Hardest RDA Edit", where 'hard' is used to mean based/awesome/woah. My browser mistranslated that to "[Most Difficult] RDA Edit", i.e. 最も難しい RDA 編集.
If GPT is given both the title and the summary (which Youtube could do internally with their API) it gives the much better translations "Max strength RDA edit" 史上最強RDA編集 or "Most villainous RDA edit" 最凶RDA編集. In general I find GPT much better on language problems than they are on almost any other task, and miles better than standard machine translation.
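A rough sketch of what feeding the model both title and summary might look like; the openai client, the model name, and the description string are assumptions made up for illustration (a platform could pull the real metadata internally instead):

```python
# Sketch: translating a slangy video title with the description as context.
from openai import OpenAI

client = OpenAI()

title = "Hardest RDA Edit"
description = "Avatar edit. RDA moments set to music. They go hard."  # illustrative stand-in

prompt = (
    "Translate this video title into natural Japanese. Use the description as "
    "context so that slang in the title is rendered with the intended meaning, "
    "not literally.\n\n"
    f"Title: {title}\n"
    f"Description: {description}"
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```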
The translation of "edit" as 編集 also feels iffy to me - I haven't seen that word used in the nominal sense for a product of editing, but only in the verbal sense for the act of editing. The term that JP net culture uses for these sorts of videos is MAD, or if you are okay with dropping any connotation that the clips were modified as opposed to just stuck together, you could stick with just 編 or even 編成動画 (compilation video).
"Hardest" feels borderline untranslatable, with its simultaneous connotation that the edit itself "goes hard" and that it makes the RDA look hard. It's actually serendipitous that 最強 works as well as it does - I couldn't think of anything that would work as well in German.
I like 最凶 better - you get the pun on さいきょう and also the slightly evil connotation.
Good to know, I thought the M stood for music and it was the same as an AMV.
Fair, if you want that connotation it's not a bad choice.
This line of discussion got me thinking a little about how strange a word edit is to begin with. Following the Latin roots, it really should just denote the act of releasing a text (e(x) + dare = give out), and there is plenty of semantic overlap between editor and publisher so that this connotation isn't gone, but somehow along the way it has acquired the overwhelming meaning of modifying something which I guess any sufficiently micromanagerial publisher has to do. Either way, in English this etymology is now pretty opaque, so edit is generally tied to this perception of some modification being involved - though there is also the expression "to edit [2+ things] together" which is rather in line with the thing below.
On the other hand, the suggested translation as 編集 (which indeed is the canonical JP name of the "Edit" UI element, as well as the job of a magazine editor) suggests a much more light-touch process of editing/publishing - it is made up from 編 "weave" (with both the literal meaning and a metaphorical interpretation as in "weaving a tale" going way back) and 集 "collect", suggesting an act of finding the right pieces and stringing them together skillfully, and this etymology is completely transparent. Meanwhile, there is an absolute overabundance of words that instead capture the modificatory aspects of editing, with subtle differences - 変更 (change+replace=modify), 調整 (tune+arrange=adjust, fix, tweak), 訂正 (correct+right=correct), 改変 (alter+change, with possible slightly negative connotations), 改訂 (alter+correct, with positive connotations)...
Part of this is that JP has a much more recent reification period, right? IIRC, the Meiji Era government basically had an explicit taskforce sitting down and deciding how to translate scientific, literary and other types of words into kanji in a way that was clear and usable. Thus 編集, 銀行, 糖尿病, etc., as well as the creation of new pronouns such as かれ.
British English hasn't had such a process, and the Americans were focused on other matters, so they're much more evolutionary. And the evolutionary process is what gives us the various masterpieces that @phailyoor has kindly provided for our, um, edification.
For 編集, at least, kotobank has citations from a 13th-century Zen Buddhist tractate ("Historians may 編集 this into an example of [some form of meditation]..."), and a 1656 translation that evidently uses the "compilation"-editing meaning ("Having resolved to do so in last year's spring, [I?] 編集 a 20-volume book called Shinpi Ketsudanshou").
edit: I also want to dispute the novelty of かれ as a pronoun. It's simply an older (perhaps regional? I have little intuition for what just fell out of prestige language use due to the west->east power shift) form of あれ that slots regularly into the this-that(close)-that(far)-which determiner pattern: これ・それ・あれ(かれ)・だれ, この・その・あの(かの)・どの, こなた・そなた・あなた(かなた)・どなた. If you have any exposure to period-drama or fake-oldtimesey speech, you might have heard かのもの with a very emphatically up-pitched か for "that person". It didn't take long to find an example of かれ being used as a personal pronoun all the way back in the Tale of Genji.
I bow to your superior knowledge. I was told that 彼 and 彼女 as gendered pronouns were an innovation to allow translating European works into grammatical Japanese, but perhaps it's not so or it was a minor twist on an established usage.
Eh, I think it is probably correct that かのじょ is an innovation! To begin with, it's an awkward mixed kun-on reading that just makes it look more pronominal over the natural かのおんな which is really just that woman, and there is no reason to believe かれ or あれ should originally be gendered - indeed, in the Genji quote it refers to a female character (Lady Kiritsubo), and in deliberately old-fashioned speech you still find lots of examples of あれ referring to females.
Yeah, I was using a Claude script to translate a fic from Russian. I can't read Russian, so I can't really tell what I'm missing out on (also the author is not the most amazing wordsmith), but it was quite decent in context, even where they were using words like 'necro-energy' that don't even exist in either language.
I do feel like it's insane how much content is now AI-driven.
Even random innocuous social media blurbs have em-dashes when it's like 'You could have written that your restaurant is open for longer hours'. I understand using AI to marshal your thoughts or if you're wanting to do longer-form writing, but there are plenty of messages where I feel like it'd just be quicker and easier to not open ChatGPT and provide a prompt.
It should be illegal to ask ChatGPT to write something that would take you less than 2 minutes to write yourself. Especially if it's well within your abilities.
Unless it's a review mandated by HR. Then ChatGPT is the right tool, as its native output is close to ridiculous HR corporate speak.
I find it especially confusing when the prompts to do some of these write-ups would be more writing and harder to craft than the actual statement itself.
I would love to know what other minuscule things you'd outlaw as dictator of earth.
Summary execution for not picking up your dog's shit, for not returning your shopping cart, and for dropping gum on the ground (or sticking it under a table, etc.). All of society's other problems would self-correct quickly after I implemented these rules.
You've got my vote. I hate when people don't pick up dog poop or return shopping carts. Gum would probably annoy me too, except I never see it out in the world.
Lee Kuan Yew isn't dead and he posts on the Motte
Black hat SEO would have a mandatory death penalty.
100% onboard with this
TFW she breaks up with you by text and you notice the em dashes
This verb implies a movement from a good state to a bad one; the language was previously not shit. Except the people using LLMs in this way already can't communicate. The original English translation you posted below is incomprehensible. You suggest
but I can't see how anyone would suggest the AI translation is worse than the original. It might screw up some of the meaning, but that comes with the tradeoff of being more readable.
Or are you just using this example to push your point that native speakers are going to degrade the quality of their communication? This seems far more to reinforce the argument that smart users of LLMs will use them to leap forward, while poor users will get left behind. As I write this post I am using the Grammarly add-on; it's a useful spelling and grammar checker. It will also pop up "writing improvements". Almost without exception, these improvements are shit, and they've been shit long before ChatGPT came along. However, it hasn't changed the way I write, because I am capable of judging the quality of its suggestions. Do you think that Grammarly has been degrading the quality of English for years because some users implement everything it says?
It's the same story with translation. 15 years ago, a non-native speaker might go to babelfish.com and pump out something completely useless. 10 years ago, they would have switched to Google translate, and got something better, but still missing a ton of meaning. 5 years ago, DeepL was the standard, but still a long way off human translation. Now it's LLMs. When learning any language, one of the first lessons a student learns is not to blindly trust any machine translation.
Eh, it was comprehensible enough; the most mistranslated part was "reacted really disgusting me" vs. what I assume was meant to be "reacted really disgusted with me" - and the true meaning can be error-corrected from context. The AI-sloppy editing destroyed more meaning, originality, dare I say soul, than the author's lack of English skills did.
EDIT: Sorry, replied to wrong post.
Original English:
Original Japanese:
今日の配信は完璧でした!俺は”踏んでくだステップオンミー!”とコメントすると、推しのはあちゃまが「キモい!」と反応してくれて
ChatGPT output:
The original is at least readable. It suggests personality and context, the nature of which makes it funny. I can’t say that the ChatGPT examples in this thread have anything at all to make them worth reading. Like the best of what the bot produces, there’s sometimes this sense of airbrushed elegance, but never any sort of underlying texture.
Claude knocks it out of the park as far as I'm concerned.
Even 4o:
Thought #1: Incredible machine translation from Claude. 4o interpolates a little that's not in the actual text ("sexy kind of heaven") and does an iffy literal translation for "peaceful moment"; "blissful moment" is a better fit.
Thought #2: Ban LLMs. They will allow comments like this to be translated to English.
ルイズ!ルイズ!ルイズ!ルイズぅぅうううわぁああああああああああああああああああああああん!!! あぁああああ…ああ…あっあっー!あぁああああああ!!!ルイズルイズルイズぅううぁわぁああああ!!! あぁクンカクンカ!クンカクンカ!スーハースーハー!スーハースーハー!いい匂いだなぁ…くんくん んはぁっ!ルイズ・フランソワーズたんの桃色ブロンドの髪をクンカクンカしたいお!クンカクンカ!あぁあ!! 間違えた!モフモフしたいお!モフモフ!モフモフ!髪髪モフモフ!カリカリモフモフ…きゅんきゅんきゅい!! 小説10巻のルイズたんかわいかったよぅ!!あぁぁああ…あああ…あっあぁああああ!!ふぁぁあああんんっ!! アニメ2期決まって良かったねルイズたん!あぁあああああ!かわいい!ルイズたん!かわいい!あっああぁああ! コミック1巻も発売されて嬉し…いやぁああああああ!!!にゃああああああああん!!ぎゃああああああああ!! ぐあああああああああああ!!!コミックなんて現実じゃない!!!!あ…小説もアニメもよく考えたら… ル イ ズ ち ゃ ん は 現実 じ ゃ な い?にゃあああああああああああああん!!うぁああああああああああ!! そんなぁああああああ!!いやぁぁぁあああああああああ!!はぁああああああん!!ハルケギニアぁああああ!! この!ちきしょー!やめてやる!!現実なんかやめ…て…え!?見…てる?表紙絵のルイズちゃんが僕を見てる? 表紙絵のルイズちゃんが僕を見てるぞ!ルイズちゃんが僕を見てるぞ!挿絵のルイズちゃんが僕を見てるぞ!! アニメのルイズちゃんが僕に話しかけてるぞ!!!よかった…世の中まだまだ捨てたモンじゃないんだねっ! いやっほぉおおおおおおお!!!僕にはルイズちゃんがいる!!やったよケティ!!ひとりでできるもん!!! あ、コミックのルイズちゃああああああああああああああん!!いやぁあああああああああああああああ!!!! あっあんああっああんあアン様ぁあ!!セ、セイバー!!シャナぁああああああ!!!ヴィルヘルミナぁあああ!! ううっうぅうう!!俺の想いよルイズへ届け!!ハルケギニアのルイズへ届け!俺は実はサイト萌えなんだっ!!
絶望した!このサイトのコメントに絶望した!
あい らいく ゆー。
かむ おーばー とぅ まい はうす えんど ふぁっく まい しすたー!
Downvoted and AAQC’d.
You have raped my eyeballs and will be hearing from my lawyers shortly.
Shouldn't that be あんど? But I am sensing a whooshing sound going past my ears here.
In a surprising concession to real English phonology, no.
Boys, boys, let’s speak in English. It’s the law.
正しいFUCKの使い方 ("the correct way to use FUCK")
It's just... the world is so beautiful, you know?
Yes, surprisingly ChatGPT is better at translating wholesale than trying to revise a work or something. Asking it to translate directly actually gives a quite accurate result.
Unfortunately, as this situation shows, the user asked ChatGPT for revisions on the English text rather than asking it to translate the work from his native language. I think this is the more typical use case, because most people beyond beginners do not write in their native language and then translate to English; they write directly in English. We are just fortunate enough to have the Japanese version as well, so we can see clearly where ChatGPT failed to capture the intention.
So we are in a world where "write English -> ask ChatGPT for revisions" is far, far inferior to "write in native language (Japanese) -> ask AI to translate to English". In this case the workflow that gives the best result involves the absolute least practice of English by the writer.
Wait, so what was the process there? Was ChatGPT given the Japanese text and asked to generate its own translation for comparison, or asked to improve/iterate on his? In general, I agree with your critique of non-native speakers using AI for text massaging (the feeling of something not quite coherent being said in superficially polished prose by an AI broadcast announcer voice with occasionally inappropriate diction is pretty grating), but in this particular case, it seems to me that the AI translation is in fact superior and somewhat more true to the original, which may be because unlike in the "Indians making slop for Tiktok/Youtube shorts" case, it had access to a literate source text. Specifically, for example, there is in fact nothing to the effect of "I could die there" in the JP text. The author must have spontaneously come up with it while writing his own proposed translation.
In general, the text we are looking at is close to a pessimal case for both AI translation and translation by someone who learned formulaic English at school, because the original is dense with subculture references and memes that are not just invoked as keywords but determine the syntax and narrative form as well. It's like trying to translate a 4chan greentext, or a doge image.
I think he must have tried to iterate on his original translation. The direct translation is more accurate:
Though I agree with @phailyoor that a lot of self-expression is lost here compared to his original attempted translation.
When you told me you're fluent in Japanese the other day, this was really not how I expected it to become relevant haha.
First thought: 'Oh, hey, I can understand this!'
Second thought: 'Oh, Christ, I can understand this.'
I'm not fluent yet but the point where I could watch degenelate videos like this and just understand them fine kind of ruined my life.
1/10 for political correctness
9/10 for humorous value
Right, this translation gets closer to the original in some ways by not reproducing the additions and deletions in the original proposal, but also loses some of the colour. Notably, none of the three translations really quite reproduces the heroin-addled vibe of the original (this was perfect, I am in a state of absolute bliss, I took a dose, and then I got another dose!! and soon I'll get yet another dose, I can't wait!!). I wonder if this sort of pathology has been thoroughly RLHFed out of ChatGPT, or one could elicit it with the right prompt.
(The "sexy heaven" thing in yours came from a typo @phailyoor introduced - it's 天国にいる気分 on paper, not the enigmatic 天国に色気分 for which that interpretation would be a fair guess.)
Oops, I should have double-checked for typos.
@4bpp sorry for double-dipping, but since I've got you here do you know why わ is used? Obviously it's usually feminine, and I understand that the male usage is from the archaic patterns where it's broadly an emphasiser like ぞ and therefore used by archaic / cool characters to express emphasis. Is that what's going on here? It doesn't quite seem to fit.
As a Kansai resident I will say it does not code feminine here, though I can't speak for all of Japan obviously. Men routinely use わ in Kansai dialect, which is in general considered a rougher way of speaking than Kanto ben.
Hmm, thanks. I've seen it come up a number of times from e.g. light novel protagonists, who I would not say are Kansai or cool. @4bpp may be right and it's just memetics that I'm overthinking.
I don't think it codes as overwhelmingly feminine in the way, say, using あたし as a first-person pronoun would, but written out like that it gives the whole phrase a somewhat more pretend/role-playing vibe, so if I really wanted to dig into it I would check if it's an imitation of the speech patterns of the vtuber the author is simping for, or has some other pop-cultural weight behind it. Either way, I don't think this is particularly worth overthinking - people have working mirror neurons, and someone using "y'all" in English or simplifying pronouncing -ing as -in would also not warrant a deep investigation of the implications and whether they have Southern or African-American roots (as opposed to, as per my theory, imitating something they have heard elsewhere).
I didn't spot that tbh. After a decade I still can't quite get all the nuances of how に should be used, especially when it's used as part of more sophisticated/niche grammar structures. N1 is still a little ways off...
I do notice that none of the translations got the nuance of 「キモい!」と反応してくれて right.
The use of くれて to imply this was a sort of mutually positive interaction changes the entire tone of the passage, so it's kind of bad GPT misses it. Though I feel like I'm putting far too much thought into the ramblings of a perv on the internet.
Like with your parallel post, I think this is reading too much into a detail. Japanese all but requires having a social directionality suffix when talking about actions done between or on behalf of other people in any remotely polite speech, so just writing ...と反応した, と言った would feel incongruously rude especially in the context of someone gushing about his vtuber idol. To translate it explicitly is to take an unremarkable piece of information that is conveyed by default expectation and elevate it as remarkable - it's as if a Japanese, or English, translator took a German text, where, after the German norm, all occupations must be marked for gender (der Fahrer (the male driver)/die Fahrerin (the female driver) etc.), and took care to translate the markers, turning the neutral "die Busfahrerin hatte einen Unfall" into the potentially sexist "the woman bus driver had an accident". (This would be even worse if you were translating to e.g. Chinese, where not even 3rd-person pronouns are gendered in speech - imagine every he/she turning into a they with an explicit mention of the person's gender!)