This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
With the increased use of ChatGPT and other AI slop in everyday communication such as casual emails and Slack messages, AI apologists have increasingly tried to excuse this usage by non-native English speakers (citation needed, but beside the point). The excuse is that for non-native speakers, AI usage can save time, or even improve the quality of the resulting writing. I want to argue the opposite: that using AI output is particularly and exceptionally corrosive for non-native English speakers.
I came across this section (plaintext transcription in a comment below) of a YT video, where an intermediate-level English learner is trying to use ChatGPT to improve a piece of writing, and also to learn from it (source video, not important). Here's the catch: ChatGPT's output is just plain bad.
Overall, my issues with ChatGPT for this use case can be broken down into three main problems. Let's go over the main revisions point by point:
stunning -> absolutely mind-blowing - Stunning is already quite a strong adjective and ChatGPT is overdoing it. OK edit.
I commented -> I typed in the comments - Absolutely a bad edit. Wordier for no added meaning, and the original English is more true to the original Japanese.
Moreover -> Not only that - Moreover is perfect here. Bad edit.
Em dash - not called for here. AI tell.
reacted really disgusting me -> actually reacted - This seriously changes the meaning, taking away a major element of the storytelling. Bad edit.
I’m in a heaven right now -> I’m in heaven - "I’m in heaven right now" would keep the emphasis. Bad edit.
It was a peaceful and amazing moment in my life -> That one moment was pure peace and bliss. Probably one of the best highlights of my life. - Deemphasized and wordified into two sentences. A better version would easily be “It was the most peaceful and amazing moment in my life”. Bad edit.
And also, the most excited thing is -> And the most exciting part is still ahead. - AI slop tell. Bad edit.
I could die there -> nothing - ChatGPT just took that out completely!!!! WTF!!!!
I really wanna support her live too. -> I really, truly want to support her with everything I’ve got. - “really, truly” came out of nowhere and the double emphasis with “with everything I’ve got” is odd. Bad edit.
Imagine that live I feel like drinking her bath water. -> Just thinking about that live … feels like I could drink her bathwater. - This one is totally lost. Basic context clues and cultural knowledge make it clear that the narrator already wants to drink gamer girl bathwater regardless of any live. The correct edit would be "When I imagine that live, I feel like I’m drinking her bathwater" or "Imagining that live feels like drinking her bathwater." The original English is closer to correct than ChatGPT, and the correct meaning can be inferred from it.
Of course ChatGPT can probably be made to produce better outputs with better prompting, or used differently, but this is just one of many examples where ChatGPT usage by a casual user has actually made things worse.
Now, what's the point of this post? First, I would like to urge everyone not to use GenAI outputs in the final work, even for edits. Using AI as a judge is probably fine, but the best way to maintain quality is probably to write all of the final text in your own words, even for people without perfect English. Secondly, while all levels of society using or even abusing AI tools may increase productivity by some metrics, it will also amount to an enshittification of all written communication.
We've seen an increasing number of complaints enter the discourse about foreign immigrants with weak English skills being annoying to deal with in everyday life. I've had similar experiences, where dealing with a fresh-off-the-boat foreigner has been an annoyance when ordering food or asking a simple question - and where hiring an American would have cost only a tiny bit more. Well, now AI slop is going to provide a double whammy: lazy or misguided native speakers will enshittify their own communication with slop, and foreigners will have their English learning impeded, so the English they do write will be worse.
Original English:
Original Japanese:
今日の配信は完璧でした!俺は”踏んでくだステップオンミー!”とコメントすると、推しのはあちゃまが「キモい!」と反応してくれて
(Rough English gloss: "Today's stream was perfect! When I commented 'Step on me!', my oshi Haachama reacted with 'Gross!'…")
ChatGPT output:
Wait, so what was the process there? Was ChatGPT given the Japanese text and asked to generate its own translation for comparison, or asked to improve/iterate on his? In general, I agree with your critique of non-native speakers using AI for text massaging (the feeling of something not quite coherent being said in superficially polished prose by an AI broadcast announcer voice with occasionally inappropriate diction is pretty grating), but in this particular case, it seems to me that the AI translation is in fact superior and somewhat more true to the original, which may be because unlike in the "Indians making slop for Tiktok/Youtube shorts" case, it had access to a literate source text. Specifically, for example, there is in fact nothing to the effect of "I could die there" in the JP text. The author must have spontaneously come up with it while writing his own proposed translation.
In general, the text we are looking at is close to a pessimal case for both AI translation and translation by someone who learned formulaic English at school, because the original is dense with subculture references and memes that are not just invoked as keywords but determine the syntax and narrative form as well. It's like trying to translate a 4chan greentext, or a doge image.
I think he must have tried to iterate on his original translation. The direct translation is more accurate:
Though I agree with @phailyoor that a lot of self-expression is lost here compared to his original attempted translation.
Right, this translation gets closer to the original in some ways by not reproducing the additions and deletions in the original proposal, but also loses some of the colour. Notably, none of the three translations really quite reproduces the heroin-addled vibe of the original (this was perfect, I am in a state of absolute bliss, I took a dose, and then I got another dose!! and soon I'll get yet another dose, I can't wait!!). I wonder if this sort of pathology has been thoroughly RLHFed out of ChatGPT, or one could elicit it with the right prompt.
(The "sexy heaven" thing in yours came from a typo @phailyoor introduced - it's 天国にいる気分 on paper, not the enigmatic 天国に色気分 for which that interpretation would be a fair guess.)
I didn't spot that tbh. After a decade I still can't quite get all the nuances of how に should be used, especially when it's used as part of more sophisticated/niche grammar structures. N1 is still a little ways off...
I do notice that none of the translations got the nuance of 「キモい!」と反応してくれて right.
The use of くれて to imply this was a sort of mutually positive interaction changes the entire tone of the passage, so it's kind of bad that GPT misses it. Though I feel like I'm putting far too much thought into the ramblings of a perv on the internet.
Like with your parallel post, I think this is reading too much into a detail. Japanese all but requires having a social directionality suffix when talking about actions done between or on behalf of other people in any remotely polite speech, so just writing ...と反応した, と言った would feel incongruously rude especially in the context of someone gushing about his vtuber idol. To translate it explicitly is to take an unremarkable piece of information that is conveyed by default expectation and elevate it as remarkable - it's as if a Japanese, or English, translator took a German text, where, after the German norm, all occupations must be marked for gender (der Fahrer (the male driver)/die Fahrerin (the female driver) etc.), and took care to translate the markers, turning the neutral "die Busfahrerin hatte einen Unfall" into the potentially sexist "the woman bus driver had an accident". (This would be even worse if you were translating to e.g. Chinese, where not even 3rd-person pronouns are gendered in speech - imagine every he/she turning into a they with an explicit mention of the person's gender!)