
Small-Scale Question Sunday for October 29, 2023

Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?

This is your opportunity to ask questions. No question too simple or too silly.

Culture war topics are accepted, and proposals for a better intro post are appreciated.


Moral dilemma or obvious thing to do?

Hey Mottizens, lend me your ears and your voices. I will keep this brief, but perhaps you can give your opinion (and tell me why). It's not Sunday anymore, but maybe someone will read this.

I have recently submitted a book chapter for publication in what is to be an anthologized set of essays. Never you mind where or what, but this is an internationally recognized publishing house.

In an odd turn, after submission I received another author's paper (to be in the same book, presumably) from the publisher to proofread and review. Which is fine. I have no problem doing that.

I noticed there were a lot of non-smart quotes in the text. Some quotes were formatted properly; many weren't. This often happens when people paste in material that was originally typed in another program (or on the internet). You see where I'm going with this, perhaps.

I decided to run the abstract through a ChatGPT detector. It was flagged as having a 51% chance of being written by AI. I ran the first paragraphs and got the same result. It scored highest on "average sentence length", where the sentences did not vary the way a human's might.

I then ran my own first page as a control. My abstract alone also showed a 20% chance of being written by AI, but the first paragraphs showed a 0% chance of AI authorship.

I don't think these systems are all that reliable, but it gave me pause. My question is: should I

1. Ignore all of this, mention that the smart quotes should be reformatted, and revise as usual.

2. Revise as usual, and email the editors the above information.

3. Stop revising, and email the editors the above information.

4. Other.

I am leaning towards 1, simply because I am not convinced the AI detector is all that accurate, and also the author is not a native speaker of English (though their English is pretty damn good). Maybe the author put the text into ChatGPT and said "Make this sound academic" or something. And at the end of the day, I am not sure how serious "generated by AI" is, whether it suggests a kind of academic fraud or is simply a tool put to use. It isn't clear.

What say you?

Note: This post was human-generated.

Okay, why are the publishers asking you to review someone else's paper? Are you an editor working for them? Will they pay you for this?

Because if you're just a contributor, why the heck are they outsourcing their editing work to you?

I think the best way to cover yourself is to send it straight back and say you're not their employee (maybe word that more tactfully). As for suspicions that the other person may have used AI to generate their essay, that has nothing to do with you unless you actually work for that publisher as their employee.

It seems extremely unprofessional because it's setting you up (and whatever others they're pulling this same thing with) for an accusation of "you read my essay pre-publication and plagiarised it!", never mind the messiness around alleged AI use. They're putting you at risk of a lawsuit or, at the very least, having your reputation trashed online.

This is not your job. Maybe they're trying to double-check for AI use and are sending the essays they receive to every contributor to evaluate, but again: this is not your job unless you are formally employed and paid by them. You're not doing free work for them, and you're certainly not being covered by them against accusations from disgruntled authors who find out you read their essay and told the publisher it was all done by a chatbot.