Tbf to Amadan, the use of 'generative AI' as a description of use case rather than of design is a pretty common one among anti-AI artists and writers.
Hm, I was not aware of that. I'd thought most such people at least ostensibly maintained a principled objection to generative AI for its training methods, rather than one based on pure protectionism.
That's fair; perhaps this "mania," as you call it, might be the immovable object that matches up to the irresistible force of wokeness. I just think that, sans definitive proof, any denial of LLM usage from an author deemed sufficiently oppressed would be accepted at face value, with any level of skepticism deemed Nazi-adjacent and appropriately purged.
Now I'm imagining a scandal where someone publishes a sort of postmodern sci-fi novel that they claim is the unedited ChatGPT log of having it write a novel piece by piece, publishing all the prompts they input between segments and all, but it comes out that, actually, the author fraudulently crafted the novel, writing each and every word the old-fashioned way like a novelist in the pre-LLM era. Bonus points if it was written by hand, as revealed by a notebook with the author's handwriting showing the rough drafts.
Bonus bonus points if it's later revealed that the handwritten manuscript was actually created by an advanced 3D printer working off the output of a generative AI, based on a prompt written by the author.
I see a couple of issues with that scenario.
One is that there will almost always be plausible deniability with respect to LLM usage. There would have to be a slip-up, such as including the meta-text that chatbot-style LLMs produce (something like "Certainly! Here is the next page of the story, where XYZ happens."), for there to be definitive proof, and I'd expect the audience and judges to pick up on that early enough to prevent such authors from becoming high status. That said, it could still get through, and someone who did a good enough job hiding this early on could slip up later in her career, casting doubt on her original works.
But the second, bigger issue is that even if this were definitively proven, with the author herself outright claiming that she typed a one-word prompt into ChatGPT 10 to produce all 70,000 words of her latest award-winning novel, this could just be justified by the publishing industry and the associated awards on the basis of her lacking the privilege that white/straight/cis/male authors have, with this LLM usage merely ensuring equity by giving her and other oppressed minorities the writing ability that privileged people are granted by their position in this white supremacist patriarchal society. Now, you might think that this would simply discredit these organizations in the eyes of the audience, and it certainly will for some, but I doubt that it would be some inflection point or straw that breaks the camel's back. I'd predict that, for the vast majority who are already bought in, this next spoonful would be easy to swallow.
"This generalized antipathy has basically been extended to any use of AI at all, so even though the WorldCon committee is insisting there has been no use of *generative AI*..."

(Emphasis added.)
If they admit to using ChatGPT, how can they claim they didn't use generative AI? ChatGPT and all LLMs are a type of generative AI, i.e. they generate strings of text. ChatGPT, I believe, was also trained on copyright-protected works without permission from the copyright holders, which is the criterion by which many people who hate AI consider generative AI to be "stealing" from authors and artists.
Just based on this description, it sounds like these WorldCon people are trying to thread a needle that can't be threaded. They should probably just say, "Yes, we used generative AI to make our lives easier. Yes, it was trained on copyright-protected works without permission. No, we don't think that's 'stealing.' Yes, this technology might replace authors like you in the future, and we are helping to normalize its usage. If you don't like it, go start your own AIFreeWorldCon with blackjack and hookers."
I'm a Catholic, and not a particular fan of Trump, and I found the picture both inevitable and mildly amusing.
Only mildly amusing? I found it holy amusing!
A part of this that hadn't occurred to me until I saw it pointed out is that there seems to be a sort of donation duel between this lady's case and that of Karmelo Anthony, a black teen charged with murdering a white teen at a track meet by stabbing him in the heart during a dispute over seating. I think there was a top-level comment here about this incident before, but there was a substantial amount of support on social media for Anthony on racial grounds, including fundraising for his defense. I get the feeling that a lot of the motivation to donate to this lady comes from people who feel that the support Anthony has been getting on racial grounds is unjust, and that supporting her is a way of "balancing the scales," as it were. This isn't the instantiation of "if you tell everyone to focus on everyone's race all the time in every interaction, eventually white people will try to play the same game everyone else is encouraged to play" that I foresaw, but it sure is a hilarious one.
Now, one conspiracy theory that I hope is hilariously true is that the guy who recorded this lady was in cahoots with the lady herself and staged the whole thing in order to cash in on the simmering outrage over the Anthony case. But I doubt that anyone involved has the foresight to play that level of 4D chess.
I don't think either is particularly moral, and it's a cultural battle to be waged against both. I don't think we'll ever convince fellow humans to stop lying to manipulate people, but I can at least imagine a world where we universally condemn media companies that publish AI slop.
So I do think there's a big weakness with LLMs in that we don't quite have a handle on how to robustly or predictably reduce their hallucinations the way we can with human hallucinations and fabrications. But that's where I think the incentives of the editors/publishers come into play. Outlets that publish falsehoods by their human journalists lose credibility and can also lose lawsuits, which provides incentives for the people in charge to check the text their human journalists generate before publishing it, and I see similar controls as being effective for LLMs.
Now, examples like Rolling Stone's "A Rape on Campus" article show that this control system isn't perfect, particularly when the incentives of the publishers, the journalists, and the target audience are all aligned with pushing a certain narrative rather than conveying truth. I don't think AI text generators exacerbate that, though.
I also don't think it's possible for us to enter a world where we universally condemn media companies that publish AI slop, though, unless "slop" here refers specifically to lies or the like. Given how tolerant audiences are of human-made slop and how much cheaper AI slop is by comparison, I just don't see there being enough political or ideological will to make such condemnation even a majority position, much less universal.
Personally: AI-hallucinated quotes are worse than fabricated quotes, because the former masquerades as journalism whereas the latter is just easily-falsifiable propaganda.
AI-hallucinated quotes seem likely to be exactly as easily falsifiable as human-fabricated quotes, and easily-falsifiable propaganda seems to be an example of something masquerading as journalism. These just seem like descriptions of different aspects of the same thing.
Can I extend this to your view on the OP being that it doesn't matter at all that the article that Adam Silver reposted is AI slop, versus your definition of "slop" in general? It doesn't move your priors on Adam Silver (the reposter), X (the platform), or Yahoo Entertainment (the media institution) even an iota?
I'm not Dean, but I would agree with this. I didn't have a meaningful opinion on Yahoo Entertainment, but, assuming that that article was indeed entirely AI-generated, the fact that it was produced that way wouldn't reflect negatively or positively on them, by my view. Publishing a falsehood does reflect negatively, though. As for Silver (is it not Nate?), I don't expect pundits to fact-check every part of an article before linking it, especially a part unrelated to the point he was making, and so him overlooking the false quote doesn't really surprise me. Though, perhaps, the fact that he chose to link a Yahoo Entertainment article instead of an article from a more reputable source reflects poorly on his judgment; this wouldn't change even if Yahoo Entertainment hadn't used AI and the reputable outlet had.
Actions speak louder than words. The fact they forcibly butted him aside due to the age concerns should be enough proof.
All that is proof of is that they believed that Biden, given the emperor-has-no-clothes moment at the debate, was less likely than an alternative to garner electoral votes against Trump. The act of taking your hand out of the cookie jar after you're caught with it in there isn't proof of any sort of owning up to screwing up by trying to steal the cookies in the first place.
I agree, though, that actions do speak louder than words. If all the White House staff and journalists who ran cover for Biden's infirmity had actively pointed spotlights at the misleading words and articles they had stated and published, followed by resigning and swearing never to pursue politics or journalism again, those actions would be proof enough in my view. Actions that don't go quite as far could also serve as proof, depending on the specifics, but they would have to be in that ballpark.
If you believe I broke a rule, I encourage you to report me.
Those rules are so vague they can apply to anyone. And when you're facing a hostile community, they apply to you.
I don't think those rules are that vague, except by stretching what "vague" means to such an extent that all rules everywhere can be declared "so vague they can apply to anyone." If you don't think that his comments were pretty obviously unkind and failed to make reasonably clear and plain points, on top of making extreme claims without proactively providing evidence, then I don't take your judgment seriously.
The "they're obviously not interested in debate" talking point is an absurd, but very common, justification for censoriousness.
I don't care if he was or wasn't interested in debate. What matters is that he was posting text that wasn't conducive to, and actually quite deleterious to, debate.
Well, if he's really not interested in debate, let him leave; don't ban him (or threaten to ban him). Call it keeping the moral high ground.
I don't see how declining to enforce against blatant rule violations is keeping the moral high ground. The rules are right there on the right sidebar, and he refused to follow the ones about things like speaking clearly, being no more obnoxious than necessary, and proactively providing evidence, despite being given ample opportunity to do so. Letting the forum be polluted with the type of content it was specifically set up to prevent seems, if anything, immoral, in that it makes the forum worse for the rest of the users who come here because of the types of discussion those rules foster (though I'd argue that there's no real moral dimension to it regardless). I don't know if Millard is a reasonable person, but he certainly did not post reasonable comments and, more importantly, posted comments that broke the forum's rules in a pretty central way.
The steelman would probably be that they've transitioned from one gender to no gender, rather than transitioning from one gender to another gender.
The true reason is probably that logic is an oppressive cis-heteropatriarchal construct, and this person ended up genuinely feeling like they're whatever identities were most useful and convenient in this context, which happened to be both agender and trans.