This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

I'm curious how the Motte sees using AI for therapy / life advice? Online I'm seeing a ton of people recommend Claude especially, but others are skeptical.
On the one hand, I could see it being useful because you have nigh-unfettered access to it and can really dig into deep problems. Also, it's trained on all the therapy texts, of course.
The other, more culture-war issue is that, because of the way RLHF works, these models will likely push one ideological lens over another, especially on deep topics like morality, relationships, casual sex, etc.
Overall I think it's a fascinating area of development, and I'm still optimistic that LLMs could help people much more than the average therapist. Mainly because I'm pretty bearish on the help people get from the average therapist.
Anyway, what do people think about therapy becoming AI?
Considering OpenAI's extensive, ahem, alignment efforts, I think using GPT in its current state as a therapist will mostly net you all the current-year (or past-year, rather; I think the cutoff is still 2023?) progressive updates and not much beyond that. I suppose you can at least vent to it. Claude is generally better at this, but it's very sensitive to self-harm-adjacent topics like therapy, and you may or may not find yourself cockblocked without a decent prompt.
I'm quite optimistic actually, in no small part because my shady source of trial Claude finally ran dry last month, and I hate to say I'm feeling its absence at the moment, which probably speaks to either my social retardation or its apparent effectiveness. I didn't explicitly do therapy with it (corpo models collapse into generic assistant-speak as soon as you broach Serious Topics, especially if you use medical/official language like that CIA agent prompt downthread), but comfy text adventures are close enough, and I didn't realize how much time I spent on those, and how salutary they were, until I went cold turkey for a month. Maybe the parable of the earring did in fact have some wisdom to it.
Despite my past shilling, I'm so far ~~hypocritically~~ valiantly resisting the masculine urge to cave and pay OpenRouter. I don't think there's any kind of bottom to that particular rabbit hole once you fall in; scouring /g/ is at least more of a trivial inconvenience than paying the drug dealer directly.
I could imagine using your trained therapy AI. But not using my own.
Counseling without outside authority is just you affirming your own traits. You have to be willing to accept what the authority says even if you don't like it. That's why I'd sooner talk to my priest than a shrink; I knew guys in college who are shrinks now.
If you're training it yourself and zeroing it in to get outcomes you like, then you're just jerking yourself off.
Therapy doesn't require accepting anything on authority. It's not particularly hard to tear people down by their own judgement without asserting any of your own, just by pointing at the things they try to look away from. There's no reason an LLM couldn't be trained to do that.
Sure I see what you mean. Maybe I'm being imprecise due to lack of experience with therapy proper.
My concern is this. Tim thinks he's a woman. He works on training a chatgpt instance to provide gender affirming care for him to help him cope with beginning the process of transition. But Tim isn't a woman, by whatever definition you like, (even assuming MtFs "exist" one can still not be an MtF), he's simply delusional.
But because Tim is building the therapist prompt by prompt, if he hears from the therapist "Tim I don't think you're trans, I think you're delusional and using this to cope with xyz;" Tim will say "Oh rats, the instance is hallucinating, let's try something else."
Yeah, he's already gone wrong there. That's not a problem with therapy though so much as this person's attempt to blow smoke up his own ass and call it therapy.
The opposite doesn't really work either though. If a human therapist says "Tim, I think you're delusional", the most likely result is broken rapport and Tim shopping for a new more gender affirming therapist. Even if the therapist managed to get enough buy in that Tim doesn't walk, he's still left with "Okay, I'm delusional. Now what? I still feel like shit." rather than "Okay thanks, all better!".
You really have to come at things without a pre-prescribed ending point in mind. Like, "I feel really shitty every time I look in the mirror and see a man. I feel like I am a woman, and that doesn't match what I see. What do we do about this?", and finding out what to do about it as you seek to understand the issue together. I guess it's pretty non-obvious how to do this effectively, now that I write it out.
Sure but it's at least imaginable that every therapist will see Tim as delusional, where you're only a quick correction from "fixing" DrGPT.
You can do this with LLMs too. I'll probably write an effort post on it if I can stop jerking myself off for an hour.
One thing to keep in mind is opsec.
Sometimes therapy sessions include pretty personal data.
With a regular meat-based therapist, all sorts of regulations are in place to limit how the data gathered in session can be used. Crucially, such data can sometimes not be compelled as evidence. The fact that the data is mostly in his head also makes automatic analysis more difficult.
Note that a medical professional can still call the cops on you for being a threat to yourself or others, which is likely out of the scope of current LLMs. Also note that certain faiths have a much stronger protection of data shared in confession than medical professionals both in law and professional ethics.
By contrast, assume that if you do not run your LLM locally, your conversations are stored permanently on a server without your control. From my understanding, the big AI companies do not try to facilitate anonymous payments and usage (e.g. suitable crypto-currencies and communication over TOR), as this would invite all kinds of abuse.
To keep your intimate conversations linked to your legal identity secure, at least the following would have to be true:
(a) The staff of the AI company does not read them.
(b) They don't train other AIs on them.
(c) They don't get hacked.
(d) They don't get a subpoena for e.g. 'all conversations mentioning cannibal ideation' by police.
If you aren't a minor internet celebrity like Gwern, where a ton of your text is in the corpus or a lot of people talk about you, having your data trained on is a vanishingly small concern. People forget how ridiculously compressed LLMs are compared to their training corpus; even if you spill a fair amount of personal info, there's little to no chance of the model explicitly linking it to you, let alone regurgitating it.
Certainly you shouldn't really be telling AIs things you are very concerned about keeping private, but this particular route isn't a major threat.
That is true of course, but I read @quiet_NaN's comment as less concerned about models having their data "baked into" newer models during training (nowhere on the internet is safe at this point anyway, Websim knows where we live), and more about the conversations themselves physically stored, and theoretically accessible, somewhere inside OpenAI's digital realm.
While I'm sure manually combing chatlogs is largely untenable at OpenAI's scale, there has been precedent, classifier models exist, and in any case I do not expect the Electric Eye's withering gaze to be strictly limited to degenerates for very long.
This is the problem for me. I tried a fun little game recently with my OpenAI instance, in which I've still been careful about what I write. It still had a ton of personal information from our chat contexts and was able to do a decent job figuring out pressure points in my life.
If I can't have full opsec with a virtual therapist then it's as worthless to me as the judgemental lefty who will call the cops on me if I'm sad.
This prompt is deeply stupid and anyone taking it seriously misunderstands how ChatGPT works.
Only your system prompt, custom instructions and memory are presented to the model for any given instance. It cannot access any conversations you've had beyond those and the current one you're engaging in. Go ahead, ask it. If it's not explicitly saved in memory, it knows fuck all. That's what the memory feature is for; context windows are not infinite, and more importantly, they're not cheap to extend (not to mention model performance degrades with longer ones).
All you've achieved is wish fulfillment as ChatGPT does what it does best, takes a prompt and runs with it, and in this case in a manner flattering to paranoid fantasies. You're just getting it to cold read you, and it's playing along.
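To spell it out, here's roughly the shape of what actually gets sent to the model on any given turn. This is a sketch, not OpenAI's real internals; the function and field names are illustrative:

```python
# Sketch of what the model actually sees per request (illustrative, not
# OpenAI's real internals): the system prompt, whatever was explicitly
# saved to "memory", and the current conversation. Nothing else.
def build_request(system_prompt: str, saved_memories: list[str],
                  current_conversation: list[dict]) -> list[dict]:
    memory_blob = "\n".join(f"- {m}" for m in saved_memories)
    return (
        [{"role": "system", "content": f"{system_prompt}\n\nUser memory:\n{memory_blob}"}]
        + current_conversation  # only this chat; other chats are never included
    )

# Anything from other conversations that was never written into saved_memories
# simply does not exist as far as this request is concerned.
```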
Well, that's disappointing:
(screenshot attached: /images/17319872848770623.webp)
You seem to have misunderstood what I was saying here.
It didn't guess anything that I hadn't told it; it just extrapolated from the memory of multiple chat threads asking about useful how-to topics. There's no magic.
The point is that even with my complete avoidance of anything truly personal, the platform has valuable information. If you're spilling your guts to a virtual therapist, it's a huge vulnerability.
I've mentioned this before, but I follow a few Replika subs and FB groups. These things are already being used for therapy. And while some people seem happy with their chatbot companions, a day doesn't go by when someone doesn't post, seriously distraught, that their AI girlfriend or boyfriend "cheated" on them, or didn't remember some important detail of their life, or behaved hurtfully. Some people really, genuinely think they are sentient and feeling, and some people are going to be really fucked up by relying on a chatbot for advice and human companionship.
I'm not convinced that this is worse than a human. There's a fair number of patients of human doctors who believe that their shrink is in a relationship with them, and some think the shrink is cheating on them. Keep in mind that the kind of person who would turn to therapy to fix themselves is likely someone with few friends and family to talk to, and is thus putting a lot of eggs in the psychiatrist basket. That it's going through AI is not really surprising to me.
Tremendously poor idea; general-purpose chatbots have already led to suicides (example: https://amp.theguardian.com/technology/2024/oct/23/character-ai-chatbot-sewell-setzer-death).
Purpose-built ones will have more safeguards, but the problem remains that they are hard to control and can easily go off book.
Even if they work perfectly, some of the incentives are poor - people may overuse the product and avoid actual socialization, leaning on fake people instead.
And that 'even if' is doing a ton of work; good therapy is rare and extremely challenging, and most people get bad therapy and assume that's all that is available.
Services like this can also be infinitely cheaper than real therapists, which may cause a supply crisis.
I'm afraid at least this particular example is wrong, and popular media grossly misrepresented what happened:
https://www.lesswrong.com/posts/3AcK7Pcp9D2LPoyR2/ai-87-staying-in-character
(Note that one of the links has rotted, but I recall viewing it myself and it supported Zvi's claims.)
Anyway, I have a more cynical view of the benefits of therapy than you, seeing it rather well described as a Dodo Bird Verdict. Even relatively empirical/non-woo frameworks like CBT/DBT do roughly as well as the insanity underpinning Internal Family Systems:
https://www.astralcodexten.com/p/book-review-the-others-within-us
Even I have to admit that Freudian nonsense grudgingly beats placebo.
You seem to agree that good therapists are few and far between, but I'd go as far as to say that I'm agnostic between therapy as practiced by a good LLM and the modal human therapist.
Oh hey!
When you get a chance I would love to hear how things are going for you!
On to the matter at hand -
Please update my understanding of that particular suicide if it's incorrect, but what I'd heard is that the person was substituting human contact with the chatbot and his parents didn't catch the worsening social withdrawal because he was telling them he was talking to someone. My fear is not that chatbots will encourage people to do things, but that they won't catch and report warning signs, and serve as an inferior substitute for actual social contact. Not sure what the media presentation is since I'm relying on professional translation.
Moving beyond that however, I think you underweight the value of therapy. DBT and CBT have excellent quality evidence at this point. The reason for those two specifically is likely twofold: they are "simpler" to perform, and because they are more standardized they are easier to research.
Also, good psychodynamics is not Freudian nonsense, it's mostly CBT with different language and some extra underlying terminology that is very helpful for managing less severe pathology. Again I tell you to read Nancy McWilliams haha.
At its absolute worst, therapy is stuff like forcing social interaction, forcing introspection and so on. Some people can function well off of a manual, and some people can study medicine on their own. But nearly everyone does better with a tutor, and that's what therapy is.
A tutor is also more likely to catch warning signs because of (at this time) superior human heuristic generation and the ability to perform a physical and mental status exam.
I've been rather miserable since I've gotten here, for a multitude of reasons, which had notably dampened my appetite for chatting up my day job online. I'm slightly less miserable right now, which is why I'm back at it! I can elaborate in DMs if you'd like.
I raised objections against claims made exceedingly uncritically in the Guardian post you linked to (having assumed you endorsed it). For example-
I can cut a grieving mother some slack, but the facts don't bear out her beliefs, and the Guardian doesn't really do much journalism here, since it would otherwise suggest her suit is unfounded.
Your personal claims seem more subtle, but even then, I find it very hard to blame the chatbot for social withdrawal here. I'd point out you can make the same argument for anything from reading books to watching anime (a bullet that some may bite, of course). In other words, a potential refuge for the maladjusted, but also something that the majority of people would be loath to ask others to consume less of or ban altogether, on the grounds that it's a net negative.
(I think the case for social media being far worse for teenage mental health is significantly more robust, and I still wouldn't advocate for it to be banned. In the case of chatbots, I haven't been nudged out of the null hypothesis.)
Imagine the chatbot were replaced by, idk, a RuneScape girlfriend (do kids these days have those? Potentially substitute someone grooming them on Discord). Would you expect said person to be significantly more helpful, or at least more worthy of blame? I wouldn't.
I'll have to see if it's relevant to the MRCPsych syllabus, God knows that having an unpleasant time with the subject makes most reading on it feel unpleasant :(
A fair point. But I contend that an AI therapist is capable of doing those things, in a limited but steadily improving fashion. You can have a natural-language spoken conversation with ChatGPT, and it's very capable of picking up minor linguistic nuance and audio cues. Soon enough, there'll be plug-and-play digital avatars for it. But I think that therapy through the medium of text works better than doing nothing, and that's the standard I'm judging chatbots by. Not to mention that they're ~free for the end user.
God knows what the standards for AGI are these days, with the goalpost having moved to being somewhere near a Lagrange point, but I would sincerely advocate the hot take that an LLM like Claude 3.5 Sonnet is smarter, more emotionally intelligent and a better conversationalist than the average human, and maybe the average licensed therapist.
It is, of course, hobbled by severe retrograde amnesia, and being stuck to text behind a screen, but those are solvable problems.
To run with your analogy, an AI therapist/teacher is far closer to a human therapist/teacher than they are to a manual or textbook! You can actually talk to them, and with Hlynka not being around, the accusations of stochastic parrotry in these parts have dropped precipitously.
What I'm really advocating for is not letting the perfect become the enemy of the good, though I'd certainly deny that human therapists are perfect. I still think that access to AI therapists is better than not, and I'm ambivalent when putting them up against the average human one.
Though I'd also caveat that Character AI probably cheaps out, using significantly dumber models than SOTA. But it's not the only option.
I find it deeply frustrating to see otherwise intelligent people (who by all rights ought to know better) anthropomorphizing algorithms in this way.
In order to "be hobbled" by retrograde amnesia, it would have to be capable of forming memories in the first place.
An LLM is literally just a set of static instructions being run against your prompt. Those instructions don't change from prompt to prompt or instance to instance.
I genuinely don't understand the objection here?
Drawing an analogy isn't the same thing as excessive anthropomorphization. The closest analogue to working human memory is the context window of an LLM, with more general knowledge being close to whatever information from the training set is retained in the weights.
This isn't an objectionable isomorphism. Or would you object to calling RAM "computer memory" and reject that as excessive anthropomorphization? In all these cases, it's a store of information.
An otherwise healthy child born with blindness can be said to be hobbled by it even if they never developed functioning eyes. I'm sorely confused by the nitpicking here!
The utility of LLMs would be massively improved if they had context windows more representative of what humans can hold in their heads, in gestalt. In some aspects, they're superhuman, good luck to a human being trying to solve a needle in a haystack test over the equivalent of a million tokens in a few seconds. In other regards, they're worse off than you would be trying to recall a conversation you had last night.
You can also compare an existing system to a superior one that doesn't yet exist.
I never claimed otherwise? But if you're using an API, you can alter system instructions and not just user prompts. I fail to see the point of this objection in the first place.
Anterograde, not retrograde. It didn't forget something it knew from its life before; it's unable to permanently remember new things. LLMs are like Clive Wearing or Hermione Granger.
Isn't anteretrograde just prograde?
An LLM can be loosely said to have both kinds of amnesia. It has retrograde amnesia in the sense that any information it had in its context window becomes "forgotten" when too much new information is accepted and overrides it, or in the sense that, treating different copies as the same entity, it forgets a conversation it had in a previous instance.
Will message you.
And yeah no doubt the media fucking sucks.
My fear is that people will engage in Her-style stuff (as in the movie), and this example is a bleeding-edge version of that.
McWilliams is useful even if you are just skimming the personality disorder chapters because you will have colleagues with those. It's also interesting enough to make you go through it at pace haha.
I think things like your therapist looking at you like you are an idiot and you going "yeah I know" are underrated parts of therapy and the chatbot isn't going to do those things for now.
Makes me wonder if you're the Scott Alexander alt because this is clearly a mental health practitioner's opinion. All LLMs go off the rails if you keep talking to them long enough, that's a technical problem to be solved in the next year or two, not a reason that human therapists should have jobs ten years from now. OpenAI has already made it a non-issue by just limiting ChatGPT's context window, you'll see this issue more on models that let you flood the context window until the output quality drops to nothing.
Just FYI, a lot of people would much rather spill their guts to an AI than to another human. Also, one of the most common kinds of stress people face is financial stress, and for these people paying for a therapist will cause more stress than it will ever resolve. Mental health professionals are much more useful to the people that need them most when they are free. Far more people will kill themselves due to not getting expensive human attention than will ever kill themselves because their cybertherapist told them to.
Haha I am a physician but I am not Scott and disagree with him on a large amount of his medical opinions.
I think you make a very fair point about access, and I don't have a good counterargument but it is worth noting that people excessively overweight their ability to manage their own health (including health care professionals who have lots of training in knowing better).
I guess the best argument I have is that these days a lot of mental health problems are caused by socialization adjacent issues and solving that with an advanced form of the problem is unlikely to be an elegant solution.
What do you disagree with him on?
The two things that stick out to me the most are his whole distaste for the FDA and his intense dislike of inpatient psychiatric stays.
The FDA does a lot of good and a lot of bad but the ratio is aligned with what we mostly value.
IP (inpatient care) is important; I feel like he probably doesn't have enough ED experience and must have worked with shitty hospitals.
Granted the last time I looked at either of these opinions from him was in like 2017? So not sure if he has updated or I'm misremembering.
Also some boring pharm stuff I remember reading back in the day, but I'm guessing his views have changed a bunch and I haven't read much on the new site; don't want to hold that against him lol.
I'm curious as to which of his opinions you disagree with? I personally can't recall anything I've read being obviously wrong, but I would hardly call myself an expert yet!
I only vaguely remember, this opinion formed back when I first discovered Scott which would have been during Trump's original run when most reputable sources of information died.
Probably anything to do with Insomnia, hypnotics, and especially melatonin. That line of research and guidelines is hideously complicated and in the U.S. at least has no clear consensus.
Any stance is wrong lol.
Hmm... I actually went into depth on melatonin recently for a journal club presentation, and looked into the papers Scott cited. It seems quite robust to me, at least the core claim that 0.3 mg is the most effective dose, though I don't know how that stacks up against the current higher-dose but modified-release tablets (those are popular in the NHS).
ED = eating disorder, in this context?
Emergency Department; that's often where the absolute worst psychiatric crises happen - people who are high as fuck (and eventually calm down before they get to Psych) or incredibly decompensated (and get snowed with medication before they get to Psych).
Outpatient Psych types in particular often forget just how bad things can get, because the kind of patients who really need inpatient management end up being too disorganized to be seen outpatient and get disposed first to the ED, prison, or state-level hospitals.
I just can’t get excited for AI therapy because honestly, unless you have literally nobody in your life to talk through things with, there’s no value to therapy. I just don’t see people with long-standing issues get better because they had therapy. In fact, some people have therapy for multiple years without ever getting to the point of not needing therapy anymore.
I'm very much of the Stoic/CBT/Jordan Peterson school of therapy. Overfocusing on feelings and overthinking problems not only doesn't work, but quite often makes your original issues much worse. The key to getting better (barring something organically wrong with your brain, and that's fairly rare) is to get out of your own head and into taking productive actions to make your life better. Feeling bad about yourself is much better treated by becoming a better person than by sitting around trying to convince yourself that just because you haven't ever done anything useful doesn't mean that you're useless. Get out there and start building, fixing or cleaning things. You'll get over feeling worthless because you'll know you did something useful.
Maybe your sample just isn’t representative? I don’t know anyone who claims their life turned around after confession, either, but that doesn’t mean it can’t happen.
I mean, I don’t know anyone who claims they decided to turn their life around after confession either. The usual claim is deciding to go to confession after making up one’s mind to turn their life around.
Confession is not the same as a long conversation with a pastor or priest about serious life issues or worries. I tend to think that people who know you well will give better advice than someone whose paycheck depends on telling you things that will make you happy.
Uh, isn't the evidence that therapy - or at least forms of therapy - is genuinely helpful to people with actual mental health issues - or at least some subset thereof, e.g. PTSD - pretty ironclad?
Thankfully I do have my effortpost/AAQC on the topic handy:
https://www.themotte.org/post/983/culture-war-roundup-for-the-week/209218?context=8#context
(In short, yes)
How do you ever use them for therapy? I tried to use ChatGPT 3.5 for roleplay and set up commands for rewinding, which were too complex for it. If it misunderstood me and I corrected it, it was still in a "poisoned" state, and it often tended to forget entirely what it was supposed to do.
I'll echo the responses below and say that 3.5 is... suboptimal; much better and nearly as accessible alternatives exist. Claude 3 is the undisputed king of roleplay and I've sung its praises enough at this point, but it is much less accessible than GPT, and to be fair 4o is not actually that bad, although it may require a decent jailbreak for more borderline things.
Aside from that, RP-related usage is best done through the API (I believe you can still generate a GPT API key in your OpenAI account settings; not sure how you ~~legitimately~~ get a Claude API key) via specific frontends tailored for the task. This kills two birds with one stone: you mostly get rid of the invisible system prompts baked into the ChatGPT/claude.ai web interface, and chat frontends shove most of the prompt wrangling like jailbreaks, instructions and Claude prefills under the hood, so you're only seeing the actual chat. Frontends also usually handle chat history more cleanly and visibly, showing you where the chat history cuts off at the current context limit. The context limit can be customized in settings (the frontend itself will cut off the chat accordingly) if you want to moderate your usage and avoid sending expensive full-context prompts during long chats; in my experience 25-30k tokens of context is the sweet spot, and the model's long-term recall and general attention starts to slowly degrade beyond that.

Agnai has a web interface and is generally simple to use; you can feed it an API key in the account settings. SillyTavern (the current industry standard, as it were) is a more flexible and capable locally-hosted frontend, supporting a wide range of both local and corpo LLMs, but it may be more complicated to set up. Both usually require custom instructions/prompts, as the default ones are universally shit; unironically, /g/ is a good place to find decent ones.
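For the curious, this is roughly all a frontend is doing per message under the hood. A minimal sketch, not anything SillyTavern or Agnai actually hardcodes; the endpoint, model name and the chars-per-token heuristic are illustrative:

```python
# Rough sketch of a frontend's per-message job: keep a custom system prompt,
# trim old chat history to a token budget, and send only that to an
# OpenAI-compatible chat completions endpoint.
import requests

API_URL = "https://api.openai.com/v1/chat/completions"  # or any compatible endpoint
API_KEY = "sk-..."          # your key
MODEL = "gpt-4o"            # whichever model you point the frontend at
CONTEXT_BUDGET = 25_000     # ~25-30k tokens is the sweet spot mentioned above

def rough_tokens(text: str) -> int:
    return len(text) // 4   # crude approximation, good enough for trimming

def trim_history(history: list[dict], budget: int) -> list[dict]:
    """Drop the oldest turns until the remainder fits the token budget."""
    kept, used = [], 0
    for msg in reversed(history):
        cost = rough_tokens(msg["content"])
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))

def chat_turn(system_prompt: str, history: list[dict], user_message: str) -> str:
    history = history + [{"role": "user", "content": user_message}]
    messages = [{"role": "system", "content": system_prompt}] + trim_history(history, CONTEXT_BUDGET)
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"model": MODEL, "messages": messages},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```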
~~Beware the rabbit hole~~ Feel free to shoot me a DM if you have any questions.

Thanks! Unfortunately I'm too depressed to check it right now... do they still need jailbreak prompts, and do the jailbreaks need regular updates?
Kind of, but it's not as big a hurdle as you imagine it to be, though you do have to at least loosely keep up with new (= more filtered) snapshot releases and general happenings. It also depends on the exact things you do; you probably don't need the big-dick 2k-token stuff for general conversation. Ever since I burned out on hardcore degeneracy I haven't really been updating my prompts, and they still mostly work on the latest GPT snapshots when I'm not doing NSFW shit.
As for jailbreaks, this list is a good place to start. Most jailbreaks come in the form of "presets" that rigidly structure the prompt, basically surrounding the chat history with lots of instructions. The preset's .json can be imported into frontends like SillyTavern with relatively little hassle; the UI can be intimidating at first, but wrangling prompts is not actually difficult - every block of the constructed prompt has its own content and its own spot in the overall massive prompt you send to the LLM. Example. The frontend structures the prompt (usually into an RP format) for you, and during chat you only need to write your actual queries/responses as the user, with the frontend+preset taking care of the rest and whipping the LLM into generating a response according to the instructions.
Unless you're just talking to the "bare" LLM itself, this approach usually needs a character card (basically a description of who you're talking to), I mentioned those in passing elsewhere.
To contextualize all this, I unfortunately have no better advice than to lurk /g/ chatbot threads; it's smooth sailing once you get going, but there's not really a single accessible resource/tutorial to get all this set up (maybe it's for the better, security through obscurity and all that).
The differences between 3.5, 4o, 4o-mini, and o1-preview are pretty amazing. The "poisoned" state is pretty much still there -- the "draw a picture, but make sure there isn't an elephant" problem.
That said, there are ways of getting around this from an API perspective. I was toying with the idea of doing an RPG just for fun. The thing is that you can't have all of this in one giant chat because it will, as you've experienced, go off the rails eventually.
If I got off my butt and did this, the approach I see as most likely to succeed is to use the model in conjunction with a wrapper that keeps memory and a better sense of history. The reason I think this is that the tokens used for input (which is the entirety of the chat) are a really inefficient way to capture the state of the game. I think it's similar to running a game yourself: you have the adventure you're playing, and a couple of pages of notes to keep track of what the players are doing.
The prompt per turn needs to take into account recent history (so things don't seem really disjointed), roughly where you are in the adventure (likely needing some preprocessing to be more efficient), and the equivalent of your pages of high level notes.
Running this with 4o-mini might actually work and be reasonably cheap.
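A minimal sketch of what that wrapper might look like. The turn window, the note-folding threshold and the prompt wording are all assumptions, and the summarize callback would just be another cheap model call (4o-mini, say):

```python
# Sketch of the wrapper idea above: instead of resending the whole chat,
# keep (a) a short window of recent turns, (b) a running page of "GM notes",
# and (c) a pointer into the adventure outline, and rebuild a compact prompt
# each turn.
from dataclasses import dataclass, field

RECENT_TURNS = 8  # how many raw exchanges to keep verbatim

@dataclass
class GameState:
    adventure_outline: list[str]              # pre-written beats of the adventure
    current_beat: int = 0                     # where the party is in the outline
    notes: str = ""                           # the condensed "pages of notes"
    recent: list[dict] = field(default_factory=list)  # raw recent turns

    def build_prompt(self, player_input: str) -> list[dict]:
        system = (
            "You are the game master of a text RPG.\n"
            f"Adventure so far (notes): {self.notes or 'none yet'}\n"
            f"Current scene: {self.adventure_outline[self.current_beat]}"
        )
        return (
            [{"role": "system", "content": system}]
            + self.recent[-RECENT_TURNS:]
            + [{"role": "user", "content": player_input}]
        )

    def record_turn(self, player_input: str, gm_reply: str, summarize) -> None:
        self.recent += [
            {"role": "user", "content": player_input},
            {"role": "assistant", "content": gm_reply},
        ]
        # Periodically fold old turns into the notes so the prompt stays small.
        if len(self.recent) > 2 * RECENT_TURNS:
            old, self.recent = self.recent[:-RECENT_TURNS], self.recent[-RECENT_TURNS:]
            self.notes = summarize(self.notes, old)  # e.g. another cheap LLM call
```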
Do you mean that, naively, the entire conversation gets passed to the LLM at each step, making it O(N^2), which grows very quickly?
There's a qualitative difference between the RP that ChatGPT 3.5 can do and what later models can do. The latter are much better, in terms of comprehension and the ability to faithfully play a role.
I'd recommend Claude 3.5 Sonnet as the very best in that regard. I expect your attempts would be much more successful if you gave it a shot. I can at least attest that it's the only LLM whose creative literary output I genuinely don't mind reading.
What's the mechanism for useful therapy? Is it hearing good advice from an actual human, or is it hearing advice that unlocks subconscious truth? I'd suspect the latter in which case LLM's may be perfectly suitable, particularly for people who don't want to reveal their inner darkness to another person. However, maybe revealing one's innermost thoughts to a living judge is what gives the therapy depth and meaning.
Apparently just reading a David Burns CBT book is enough to cure most people's depression, so I would guess that if a chatbot can copy that experience, it should be pretty revolutionary for anyone willing to use one as a therapist (which is the biggest obstacle).
I occasionally use an LLM (LLaMA) as a therapist. If I'm feeling upset or have a specific psychological issue I want to get a better perspective on, I will just go on there, explain my situation, and ask for answers in a style I like (usually just asking it to respond as a therapist or from an evo-psych perspective or something like that). When it gives me an answer that is too woke, I will just say that the answer sounds ideologically motivated and I'd rather it tell me the hard truth or a different perspective, and 90% of the time it will give me a less annoying answer. I have done real therapy a handful of times in my life and the experiences have ranged from very annoying to somewhat helpful; I don't like speaking honestly about myself to other people, and especially not to professional strangers. So I prefer to speak to an AI that can't judge me and doesn't make me feel like I have to judge myself when sharing.
I can be creative with the prompting as well which I like, like I can think of whatever character or personality I’d want to get advice from and with a short prompt the ai can mimic whatever perspective I want.
I see it as useful for me, as a grown man who understands how ai and therapy are meant to work broadly, but I don’t think it should replace real therapy for most people (like children or the elderly or normal people who are fine with talking to human beings.)
Tequilamockingbird’s point below about the ai providing validation seems valid though. I could easily prompt the ai to just agree with whatever I’m saying and always tell me I’m right and everyone else is wrong so I try to avoid that failure mode, rather seeking more objective views or explanations of my issues rather than just what would make me feel more right.
Llama is probably the way to go if you care about privacy at all. But yes I agree that it can be useful as a sounding board, not to take real advice from though.
I am reminded of the 4chan greentext about the anon who hires a prostitute to talk to about his problems because her hourly rate is cheaper than his insurance copay for a real therapist.
I've unironically done this. Prostitutes tend to be very open-minded people who are also good at keeping secrets. The downside is they can't really give you any meaningful advice, but I don't think that's really the point of therapy.
Plot twist: prostitute is a student/has a degree
Better yet, the prostitute is a therapist moonlighting for extra cash.
Good joke. Everybody laugh. Roll on snare drums. Curtains.
There’s a Roald Dahl short story with a similarly structured (much tamer but profoundly uncomfortable) punchline.
O. Henry’s “Gift of the Magi” comes to mind, too, but that one is heartwarming.
Therapy is inherently opinionated. I can't see an LLM offering any deep insights because deep insights are sharp and cutting. LLMs are soft.
But they are good for Reddit-tier sanity checks. "My parents used to beat me within an inch of my life. Is that abuse?" Yeah, an LLM will help with that. But so will Reddit. LLMs can be especially useful here if it is too embarrassing to post even as an anon.
Overall, it serves as a great 'intake specialist' and friend, not so much a therapist. It's a great resource for initial direction and something to riff off, emotional or otherwise.
I wouldn't trust it past that point.
Where are you getting this idea? It's certainly not true.
Does regular therapy actually do more than that? Most of the value (unless you’re literally diagnosed with a real mental disorder) is in hearing yourself talk about the problem. It’s probably no better or worse than talking to a friend or clergy or a parent. Even journaling generally helps to get things off your chest and often just putting down on paper the stuff that happened or that’s in your head can give you insight.
Therapy is probably worse than talking to a parent/pastor/friend, because therapists are paid strangers who’ve been trained to see every problem primarily in terms of feelings.
Therapists also are financially incentivized not to "fix" you. This is a danger in a lot of professions, but due to the nature of mental health, it's not nearly as obvious when there's malpractice – at least you can tell when a doctor has failed to set your leg. And it's also easier for someone to fail to do what's best for the patient in subtle ways; a surgeon might make a mistake or cut a corner out of tiredness, but it's even easier to deceive yourself about your own motives and undermine your patient for any number of reasons, as a therapist.
I'm not saying all therapists are evil, greedy, or even useless. But the incentives flow in directions that really should make you think twice and very carefully before you reach for your wallet.
I don’t buy into therapy. But if I was going to do therapy, I would want clear actionable goals for what I want to achieve and when it is achieved.
They also tend to always say that everything will be ok; turns out everything did not turn out ok. I could have bought an awesome drone for the money I spent getting lied to and it would have brought me more happiness long-term.
Barring the development and deployment of true Iain Banks-esque AGI, I think that it's an absolutely terrible idea that will do more harm than help.
Unfortunately the very qualities that make existing generative AIs a terrible therapist are also likely to make them a popular choice as one, as IME most lower-functioning individuals don't want "help" as much as they want validation.
So in the interests of "success" (from the POV of people selling AI therapy), that will be the focus of the training data, as without model permanence and/or a functional theory of mind, things like accountability and identifying underlying issues will be largely off the table.
I'd assume most therapists would hold to similar ideological lenses unless you sought out a therapist directly and explicitly sponsored by a church or similar institution.
I think if the AI could hold a memory of previous discussions it would probably be as good as any therapist in most aspects, possibly better because it's available 24/7 at a low price, but possibly worse in minor aspects in that it can't read the unspoken cues of body language, vocal tenor, implicit context etc and it can't sympathetically offer a box of tissues and a cup of tea. This is assuming the patient is actively engaged and moderately literate and intelligent enough to be able to guide their own sessions. I expect less able patients would need the kind of prompting and closer attention that is more suited to an IRL interaction.
I also expect there are big blind spots I'm missing like how to transfer "notes" from online sessions to an IRL therapist, or how to alleviate a crisis that rises above an initial need to talk things over.
Of course there's also the risk of someone who is already mentally unstable talking to a fake person that's been programmed to be agreeable, but if AI therapy was properly established in its own right I assume it would be operating under a custom prompt to better tune it to the task.
Have the Therapist-LLM write post-interview notes like a psychiatrist would. You could even come up with a code to signal a red-flag status to the IRL therapist in a way that would be invisible to the patient reading their own note. (e.g. it being written in italics or all-caps could mean that the red-flag detector was triggered during this session)
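A minimal sketch of how that hand-off could look if the therapist-LLM is asked for structured notes instead of a stylistic signal; the note schema and the flagging convention are made up for illustration:

```python
# Sketch: ask the therapist-LLM for structured post-session notes that include
# a risk flag, then strip the flag from the copy the patient can read.
# The note schema and the signalling convention are illustrative assumptions.
import json

NOTE_PROMPT = """After the session, output JSON with exactly these keys:
"summary": a short clinical summary of the session,
"themes": a list of recurring themes,
"risk_flag": true only if self-harm or harm-to-others indicators came up."""

def split_notes(llm_json: str) -> tuple[dict, dict]:
    """Return (clinician_note, patient_note) from the model's JSON output."""
    note = json.loads(llm_json)
    clinician_note = note                      # the IRL therapist sees everything
    patient_note = {k: v for k, v in note.items() if k != "risk_flag"}
    return clinician_note, patient_note

# Example usage with a canned model response:
raw = '{"summary": "Discussed work stress.", "themes": ["sleep", "isolation"], "risk_flag": false}'
for_clinician, for_patient = split_notes(raw)
assert "risk_flag" not in for_patient
```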
What is the obsession of Americans (and unfortunately younger Gen Zs in Europe too) with therapy?
Anyway, my opinion is: LLM-delivered quackery will be as efficient as the human kind, but at least it will be substantially cheaper.
I think it's probably that people just have fewer friends and social interactions now. Therapy has jumped in to fill the gap that socialising, communal worship, hobbies and sports have left. Combine that with safetyism and I can see how we'd end up with a situation where a young person feels lonely or like his life lacks meaning and will end up talking to a state sanctioned professional, when what he really needs is to hang out with his friends more.
What friends, though?
~15% of Gen Z have no close friends, that's probably a serious portion of people who feel lonely.
More now than when? I agree with you on some level (what you say seems undoubtedly true, at least in terms of real-world interactions as opposed to, say, MMORPGs or whatever), but as someone who was a kid in the 70s and a teen in the 80s, there was a lot of therapy talk even then. Maybe just in Hollywood? Because I have some pathology where I remember things, I recall clearly the lines from the 1989 film Sex, Lies and Videotape:
The talk-therapy boom, at least in the US, arguably seems to have started in the mid-20th century (when "shellshock" morphed into PTSD) and has just ballooned since then. I'll be the first to say I'm out of touch with current US norms, but I certainly remember the ethos of "Talk it out" even from childhood.
I was joking a couple of months ago that when guys need therapy, what they need is two hours of hiking with their best friend, two hours of lifting, two porterhouse steaks and two bottles of bourbon.
Also there's a narrative that everyone is broken or suffering from trauma in some form and thus EVERYONE needs 'healing' to manage their lives. And people who deny needing healing are the most broken of all! So they work from the assumption that anyone who hasn't gone to therapy must be broken, and thus therapy will help fix things... even if that person had a perfectly normal, healthy upbringing.
I say this as somebody who used therapy to get over a bad breakup. It helped me work through some things, get my emotions out, process my own role in the events and my own personal failings, and then... get back to real life quicker. It's a tool! If it works, you should eventually be able to stop using it.
But at the end of the day it led me to conclude that I'm doing almost everything 'right', have an accurate world model and generally a normal response to life events... and it's EVERYONE ELSE who needs to get their shit together.
This Eliezer Yudkowsky tweet lives in my head rent free.
I think a lot of people use the need for therapy or the fact that they're in therapy as an excuse to not address actual life circumstances that are holding them back.
And by the same token, if their therapist isn't pushing them to address or change their life circumstances, they're probably just there to collect a check and make the person feel like they're doing something constructive.
I don't know if LLM therapists will suggest actual proactive steps to improve life circumstances.
Why on earth would you talk to either therapists or AI for advice? The ordering of who to go to should be something like elders->good friends->randos->the denizens of your dreams->homeless crack addicts->unfeeling algorithms->anyone who charges for advice.
I keep hearing radio ads for “California Psychics.” I totally believe they’re less useful than AI.
With a nod to the humor in your post, the answer seems obvious: lack of judgment. The phrase can be read as a double entendre, of course, but I mean the lack of feeling as if your interlocutor is holding a gavel, ready to bang it the moment you unburden yourself. That feeling diminishes basically as you move from left to right in your scale there.
IME homeless people give better advice more cheaply and less judgmentally than therapists.