marinuso
0 followers   follows 0 users
joined 2022 September 06 12:42:16 UTC
User ID: 850


On a discussion forum in particular, you care that there's an actual person behind the post, who actually holds the view they communicated, and who can respond to follow-up questions. That's what discussion fundamentally is. Ideally, of course, it shouldn't matter how that person edits his posts, but it does matter that the posts are his in a real sense.

Even before AI, we cared when this wasn't the case. People would pretend to hold views they didn't, or be people they weren't, in order to rile people up, and we'd call them trolls and they'd get banned. Note that even in that case there is a gray area. If someone's not too bad of a troll, and his posts are good enough discussion fodder, he might be tolerated for a while, even though people know he's a troll.

But being a troll by hand takes effort, and that limits the amount of trolling. Meanwhile, LLMs have caused a flood of "content". Marketers, advertisers, OnlyFans girls, influencers and the like often euphemistically(?) refer to their output as "content" and to the thing they do at their jobs as "creating content". The problem with LLMs is that it's become much too easy to create "content" in this sense.

If you want to be a troll nowadays, you just turn on your LLM and let it flood the place.

If you have a working LLM detector, or even something close enough to it, I can understand a rule that says "whatever it flags is banned". Yes, it's possible to use LLMs with good intentions and/or with good results. And you may even apply leniency in such cases, even when it's obvious someone's using an LLM. But the main effect of LLMs is to drastically simplify the job of a bad actor.