biboo-taxation

0 followers   follows 0 users
joined 2025 August 05 06:38:31 UTC
User ID: 3871

No bio...

I'd like to think I'm more neutral than most on this issue; I don't have strong feelings about ICE in general. I'll grant that if you gave me that prompt I might have responded similarly, but that was definitely not the initial reaction I had when I watched the video.

In the video I see a woman approach an ICE officer with her phone out, recording him. I'm sure this is annoying, but she does not look like a physical threat. The ICE officer responds by shoving the woman to the ground. A man sees this and moves between the officer and the woman with his hands up, possibly making contact with the officer but not in an obviously aggressive way. He is then immediately pepper-sprayed. He falls to the ground, is dog-piled, disarmed, and eventually shot.

I think there are two big reasons the public is reluctant to blame him for what happened. First is the fact that he appears to be trying to defend the woman who was shoved. You can argue that what he did was dangerous, but putting yourself in danger to protect someone is generally seen as honorable.

Second, the situation escalates so quickly. In the Good case, people were arguing that it doesn't make any sense to shoot at the driver of a car that is going to run you over, since it's going to run you over regardless. The response was that shooting was reasonable in a split-second situation where people won't make perfect decisions. Well, this guy made a split-second decision to stop a woman from being attacked (from his POV) and almost immediately triggered the chain of events that led to his death.

To be clear, I'm not trying to say which side is responsible for what. I'm just saying I don't think the argument that the man shares the blame for his own death because he intervened in police activity is going to be compelling to people who watch the video, and asking ChatGPT isn't going to explain why that is.

I'm someone who has gone to neither therapy nor confession, but the topic interests me because I'm confident that neither would do anything for me, and yet everyone else seems confident in the reverse. The differences you've listed were interesting to read about, but I would assume that the person claiming therapy is the new confession would say those are the superficial differences. The dodo bird verdict suggests that therapeutic methodology doesn't really matter as much as the "therapeutic relationship". My (perhaps flawed) interpretation of this result is that it just makes people feel better to talk about their problems and their feelings, and to have someone tell them things are, or can be, okay. I've never understood this, because it doesn't have the same effect on me if I know the underlying problem is still there, unchanged. But this seems to be the fundamental connection between therapy, confession, and even AI "therapy".

I wonder if this is really true. Let's say you're on your deathbed and everyone you personally care about is dead. You have the option to press a button that gives you a bit of morphine on the way out, but it will make everyone currently alive not want to have children, and as such humanity will die out in a few generations. Do you press that button? I consider myself rather nihilistic, but I wouldn't press it, so I have to assume I have some preference somewhere for humanity to continue on. This means it's not actually a categorical preference but one based on trade-offs.

Today, OpenAI released an update that will put more mental health guardrails on ChatGPT. I'd been hearing about chatbots inducing psychosis, and I'm assuming this was a response to that. But looking into the topic more, I'm astounded by how much of average people's mental health is becoming tied to these chatbots. Not just people with severe mental illness, but perhaps even the majority of all people who use chatbots.

A recent survey shows:

  • 49% of LLM users who self-report an ongoing mental health condition use LLMs for mental health support.

  • 73% use LLMs for anxiety management, 63% for personal advice, 60% for depression support, 58% for emotional insight, 56% for mood improvement, 36% to practice communication skills and 35% to feel less lonely.

A quick browse of Reddit down this particular rabbit hole makes me realize how many people are already talking to chatbots as a friend or therapist. My impression is that it's similar to the early days of online dating: people are sort of embarrassed to admit they have an AI friend, but the numbers increase rapidly the younger you go. I've seen estimates that between 10 and 50(!) percent of young people have used AI for companionship.

In retrospect, it would be shocking if AI therapy didn't take off. Probably the biggest barriers to getting therapy are cost and availability. Chatbots are available 24/7, essentially free, and will never judge you. The rate of mental illness is rising, particularly among young people, so the demand is there. But it's not just that; the idea of therapy is ingrained into today's culture. There's a sense that everyone should get therapy (who among us is truly mentally healthy?), and so on. I could easily see it becoming as ubiquitous as online dating is today.

I admit I'm personally fairly skeptical of therapy in general. As I understand it, it doesn't really matter which therapeutic method you use; the results are about the same. So probably most of the benefit comes from just having someone you can vent to who is empathetic and won't judge or get bored. If that's the case, then AI therapy is probably as good as or better than human therapy for cases that are not severe. On Reddit I see a lot of comments saying that AI therapy has helped them more than years of human therapy, and I can believe that.

So if AI therapy is helping so many people, is that a good thing? I see a lot of parallels between AI therapy and the AGI alignment problem. I believe people when they say they went to therapy and came out feeling better. I'm not really confident that they came out with a more accurate view of reality. Recently I went down another tangentially related rabbit hole about an online therapist who goes by the name of Dr. K and has publicly streamed his therapy sessions (for legal reasons he doesn't actually call them therapy sessions). The thing that struck me is just how vulnerable a state of mind people are in during therapy, and how subtly assumptions about reality can be pushed on them.

So if you consider how impressionable people are when receiving therapy, how increasingly common it is for adults to use chatbots for therapy, and how increasingly common it is for kids to grow up with chatbots as friends, then the potential impact of subtle value assumptions in these models looms large.