The Wednesday Wellness threads are meant to encourage users to ask for and provide advice and motivation to improve their lives. It isn't intended as a 'containment thread' and any content which could go here could instead be posted in its own thread. You could post:
- Requests for advice and/or encouragement, on basically any topic and for any scale of problem.
- Updates to let us know how you are doing. This provides valuable feedback on past advice / encouragement and will hopefully make people feel a little more motivated to follow through. If you want to be reminded to post your update, see the post titled 'update reminders', below.
- Advice. This can be in response to a request for advice or just something that you think could be generally useful for many people here.
- Encouragement. Probably best directed at specific users, but if you feel like just encouraging people in general I don't think anyone is going to object. I don't think I really need to say this, but just to be clear: encouragement should have a generally positive tone and not shame people (if people feel that shame might be an effective tool for motivating people, please discuss this so we can form a group consensus on how to use it rather than just trying it).

Since I got tagged in here:
I have no real opinion on your post that set this off, and believe you that you didn't use AI to write it.
I do think your style has gotten worse since you became such an AI enthusiast, mostly in that it is more wordy and "tryhard." I do strongly suspect it's the LLM influence, which you think is a good thing because they "write well," but the thing is, mostly they don't. They write very fluently. They can fill space with words, words that sometimes sound lyrical or profound, but... it's empty.
It's hard to describe without going into a much longer post about writing style, but it's very similar to the debate over AI art. I am not an AI art hater, I think occasionally it can produce really cool stuff, but mostly it's good for outputting placeholder art that gets the job done... you know, D&D characters or "I have a picture in my head that would be cool to see rendered but I don't want to pay someone to draw it" or bland corporate stuff. Notwithstanding Scott's AI Art Turing test, most AI art is very, very recognizable as AI. You know the look: a little too polished, a little too saturated, a little too uniform in tone and shade and crosshatching (even when the prompter tries to make it draw in different styles), an emptiness in the eyes... the illustration might be perfect in form; we don't see six-fingered hands or necklaces that meld with shoulders as much anymore, but it's still full of tiny details and stylistic choices that a human artist wouldn't make. And it's all very samey, like imagine every single artist in the world graduating from CalArts and trying as hard as possible to replicate the CalArts style.
AI writing is the same!
It's not just the tells (em-dashes, "It's not X, it's Y"), which, like six-fingered hands and necklaces melding into shoulders, LLMs are starting to be trained not to produce so predictably. It's the sameness, the pseudo-profound verbosity, the fluency that mistakes many pretty-sounding words chained together in grammatically correct sentences for saying something prettily.
You are starting to write like a guy who reads LLM output and thinks "Yeah, that's good writing!" As if all those CalArts students were starting to take their art classes from ChatGPT and imitating LLM style instead.
Maybe in the future, maybe even in the near future, AI will improve enough to make this moot. We don't have LLMs that can write entire novels in one shot yet, and even with lots of prompting, the novels they can write are absolute crap. But I have no doubt people will read them, just like people read progression fantasies and litRPGs that are absolute crap in terms of writing style (*cough* Reverend Insanity *cough*). There is no accounting for taste, and some people don't actually care about style and craft and skill beyond basic get-the-job-doneness. "Give me words that tell a story, and make the story interesting. Give me pixels that form big round boobies and a waifu fuck-me face."
That's... fine, I guess? But don't mistake it for good.
I think the fact that you are defensive about this is kind of weird. Like you are insecure either about your own writing, or about the potential of LLMs, or about the intersection of those two things.
There's a writer on Medium I kind of casually follow for his trainwreck-of-a-life stories, and he gets dragged regularly for writing posts that scream "ChatGPT." He has admitted he uses AI for research, outlining, and sometimes phrasing, but "he writes it all himself." After another post that got a bunch of people calling him an LLM, he wrote a long, huffy, defensive post about how this is his writing style, this is how he's always written, ChatGPT is copying him, not the other way around, and fuck the haters. And, well, I guess I believe him if he says he's not actually letting ChatGPT write his posts for him (I don't, really; I think he's letting ChatGPT "outline" his posts and then he does some editing and tweaking and calls it "his writing"). But the degree of his defensiveness really convinced me he knows he's using too much AI in his writing.
Note that I am not saying you're doing the same thing, just... I think you know you're outsourcing too much to AI, and now you're getting pissy when people point it out.
On that note:
AI detectors are themselves not that reliable, since the ability to detect AI writing is a moving target, so posting "An AI detector said my writing is 100% human" is probably not that convincing to most people. (Just as many people have had the displeasure of seeing something they know they wrote themselves tagged as "almost certainly AI" by an AI detector.)
I do not think we should be using mod logs to tell people "You are a crap poster so I dismiss your argument."
I disagree! I do not think that the majority of LLM output is worth reading. That is not the same as LLMs being incapable of good writing. Getting something decent out of them takes effort. Not some kind of overcomplicated prompt engineering nonsense, but more effort than bad actors take.
To illustrate, I can truthfully claim that Xianxia as a genre is sloppy trash (most of it is) while simultaneously arguing that Reverend Insanity is peak fiction. The selection process is what allows for a recommendation.
As you can see, we have irreconcilable differences. Pistols at dawn?
I really can't win. If I stay quiet and ignore things: avoidant behavior. If I just say that, yeah, I've used AI: that's taken as a no-contest plea. If I actually take a stand: suddenly the lady doth protest too much. Nah, this lady has principles, and is willing to argue them.
I have heard claims that Pangram is better than most. For example, it's batting 100% here, admittedly for a single sample. To the extent that people have used AI detectors on me in an attempt to shore up their argument that I'm using AI (in a post where I allude to the fact that I'm using it), I feel entitled to use them myself. If it works, then you should believe in my probable innocence; if you believe it doesn't work, then you had no reason to consider me guilty beyond what I've already confessed.