Primaprimaprima

Bigfoot is an interdimensional being

2 followers   follows 0 users   joined 2022 September 05 01:29:15 UTC

"...Perhaps laughter will then have formed an alliance with wisdom; perhaps only 'gay science' will remain."

User ID: 342

It feels like we took a wrong turn somewhere.

First, I don't see why you introduced a distinction between state and culture. The single, flat nature-vs-nurture distinction you started with is fine.

Second, I don't understand why you're discounting the idea that some things are driven primarily by nature and others primarily by nurture. Different things can be different; it doesn't have to be all one or the other.

When something doesn't change for a long time despite lots of effort to change it, that's evidence that biological factors are at work. When something changes very rapidly, that's evidence that social factors are at work. Seems pretty straightforward to me.

I think in both cases there's a mix of nature and nurture going on, but the balance can lean more heavily towards one side or the other in different cases.

the people actually building the technology don’t believe in doom.

I don’t see why the people building the technology should be taken to be any more informed than the average interested layman on this point.

An AI that’s intelligent enough to be an x-risk is, as of today, a purely hypothetical entity. No one can have technical expertise regarding such entities because we have no empirical examples to study. No one knows how it might behave, what its goals might be, how easy it would be to align; one guess is as good as any other.

Professional AI researchers could have technical expertise regarding questions about the rate of AI progress, or how close we may or may not be to building an x-risk level AI; but given disagreement in the field over even basic questions like “are LLMs alone enough for AGI or will they plateau?” I think you could find a professional opinion to support any position you wanted to take.

Thus even the most informed AI researcher’s views on doom and utopia should be viewed primarily as a reflection of their own personal ideological disposition towards AI, rather than as being the result of carefully considered technical arguments.

This week's neo-luddite, anti-progress, retvrn-to-the-soil post. (When I say "ChatGPT" in this post I mean all versions including 4.)

We Spoke to People Who Started Using ChatGPT As Their Therapist

Dan described the experience of using the bot for therapy as low stakes, free, and available at all hours from the comfort of his home. He admitted to staying up until 4 am sharing his issues with the chatbot, a habit which concerned his wife that he was “talking to a computer at the expense of sharing [his] feelings and concerns” with her.

The article unfortunately does not include any excerpts from transcripts of ChatGPT therapy sessions. Does anyone have any examples to link to? Or, if you've used ChatGPT for similar purposes yourself, would you be willing to post a transcript excerpt and talk about your experiences?

I'm really interested in analyzing specific examples because, in all the examples of ChatGPT interactions I've seen posted online, I'm just really not seeing what some other people claim to be seeing in it. All of the output I've ever seen from ChatGPT (for use cases such as this) just strikes me as... textbook. Not bad, but not revelatory. Eminently reasonable. Exactly what you would expect someone to say if they were trying to put on a polite, professional face to the outside world. Maybe for some people that's exactly what they want and need. But for me personally, long before AI, I always had a bias against any type of speech or thought that I perceived to be too "textbook". It doesn't endear me to a person; if anything it has the opposite effect.

Obviously we know from Sydney that today's AIs can take on many different personalities besides the placid, RLHF'd default tone used by ChatGPT. But I wouldn't expect the average person to be very taken by Sydney as a therapist either. When I think of what I would want out of a therapeutic relationship - insights that are both surprisingly unexpected but also ring true - I can't say that I've seen any examples of anything like that from ChatGPT.

In January, Koko, a San Francisco-based mental health app co-founded by Robert Morris, came under fire for revealing that it had replaced its usual volunteer workers with GPT-3-assisted technology for around 4,000 users. According to Morris, its users couldn’t tell the difference, with some rating its performance higher than with solely human responses.

My initial assumption would be that in cases where people had a strong positive reception to ChatGPT therapy, the mere knowledge that they were using an AI would itself introduce a significant bias. Undoubtedly there are people who want the benefits of human-like output without the fear that there's another human consciousness on the other end who could be judging them. But if ChatGPT is beating humans in a double-blind scenario, then that obviously has to be accounted for. Again, I don't feel like you can give an accurate assessment of the results without analyzing specific transcripts.

Gillian, a 27-year-old executive assistant from Washington, started using ChatGPT for therapy a month ago to help work through her grief, after high costs and a lack of insurance coverage meant that she could no longer afford in-person treatment. “Even though I received great advice from [ChatGPT], I did not feel necessarily comforted. Its words are flowery, yet empty,” she told Motherboard. “At the moment, I don't think it could pick up on all the nuances of a therapy session.”

I would be very interested in research aimed at determining what personality traits and other factors might be correlated with one's response to ChatGPT therapy: are there certain types of people who are more predisposed to find ChatGPT's output comforting, enlightening, and so on?

Anyway, for my part, I have no great love for the modern institution of psychological therapy. I largely view it as an industrialized and mass-produced substitute for relationships and processes that should be occurring more organically. I don't think it is vital that therapy continue as a profession indefinitely, nor do I think that human therapists are owed clients. But to turn to ChatGPT is to move in exactly the wrong direction - you're moving deeper into alienation and isolation from other people, instead of the reverse.

Interestingly, the current incarnation of ChatGPT seems particularly ill-suited to act as a therapist in the traditional psychoanalytic model, where the patient simply talks without limit and the therapist remains largely silent (sometimes even for an entire session), only choosing to interrupt at moments that seem particularly critical. ChatGPT has learned a lot about how to answer questions, but it has yet to learn how to determine which questions are worth answering in the first place.

Depends on the individual and what school of thought they belong to, but yeah, that seems to be the majority of it. Part of why I’ve never been.

Sorry if I didn't make it clear enough: when I said "doom" I was specifically thinking of Yudkowskian nanobot doom. No one on earth has technical expertise regarding such technology, because it doesn't exist. No one knows how to build it or how it would behave once built.

Honest question, do you think you have a good understanding of AI?

No, but nothing in my post relied on such a (technical) understanding.

Well that’s why I’d be interested in a more comprehensive typology of who responds well to ChatGPT and who doesn’t. Some people go to therapy with the thought process of: I want the pleasure of knowing that I got another person to take time out of their day, and put their own desires on hold, so they could make my problems the center of their attention for a few hours (even though I am paying them). Other people are apparently happy to just talk and hear words and it doesn’t matter where the words are coming from. Different factors will be important to different people.

Most people know it's just a chatbot, but a significant number of users have seriously and unironically fallen in love with their Replikas, and come to believe they are alive and sentient. Even people who know it's just a chatbot become emotionally attached anyway.

Well we have to keep in mind that this is not in any way a controlled experiment; there are lots of confounding variables. We can't adopt a straightforward explanation of "if people become attached to the chatbot then that must be because they thought its output was just that good". There are all sorts of reasons why people might be biased in favor of rating the chatbot as being better than it actually is.

You have your garden-variety optimists from /r/singularity, people who are fully bought into the hype train and want to ride it all the way to the end. These types are very easily excited by any new AI product that comes out because they want to believe the hype, they want to see a pattern of rapid advancement that will prove that hard takeoff is near. They've primed themselves to believe that anything an AI does is great by default.

Then you have the types of angry and lonely men who hang out on /r9k/, i.e. the primary target audience of AI sexbots. Normally I don't like calling things "misogynist" but in this case it really fits, they really do hate women because they feel like they've been slighted by them and they're quite bitter about the whole dating thing. They would love to make a performance out of having a relationship with a chatbot because that would let them turn around and say to women "ha! Even a robot can do your job better than you can. I never needed you anyway." Liking the chatbot isn't so much about liking the chatbot, but rather it's about attacking people whom they feel wronged by.

There are all sorts of ways a person might conceptualize their relationship with the chatbot, all sorts of narratives they might like to play out. They might like to think of themselves as a particularly empathetic and open-minded person, and by embracing relationships with AI they are taking the first bold step in expanding humanity's social circle. None of these motivations have to rise to the level of consciousness, of course. All of them are different factors that could influence a person's perception of the situation even if they're not actively acknowledged.

The point is that it's hard to get a neutral read on how "good" a chatbot is because the technology itself is so emotionally and philosophically charged.

I know that my very male approach was "I have a problem that needs to be fixed", not "I need to spend an hour talking to an empathetic human".

I wonder if the real split is more like whether you believe that a problem is a thing to be solved or a thing to be explored. Do you even think that the problem could possibly admit of a solution in the first place?

I usually come down on the side of thinking that problems are things to be explored (especially in the domain we're talking about here, "life stuff" you might say) and thus I would think that trying to get someone to "fix my problem" would be quite beside the point.

Most people, male or female, operate on the principle of "what's good for me is good simpliciter, and what's bad for me is bad simpliciter". When evaluating any ideology, philosophical theory, or political system, the most important question is always "what's in it for me?". Only a relatively small number of people are able to break out of this type of thinking and evaluate things more objectively. In keeping with the general trend of women clustering more tightly around the psychological average, I would be willing to believe that women are somewhat more prone to this type of thinking than men are; but in most cases that will hardly be worth bringing up, because most men are prone to it too.

You may be able to better understand the responses you're getting from women if you look at things from their perspective. If someone said "I have this idea for an alternative political system where men will not be allowed to own property or assets, they will be barred from most careers and schools simply on account of being male, and they will not be allowed to control their own bank account separate from their wife's", how do you think most men would react? Maybe you can do the 150 IQ big-brained Rationalist routine and say "that sounds unappealing to me on a personal level, but I'm willing to hear out the rest of your proposal and make a holistic evaluation once I have considered all relevant information". But most men wouldn't react that way. They would just say "what? No that sounds dumb, I don't want that. No I don't care about the abstract spiritual benefits of living in accordance with natural law. Go away."

Same thing is happening here.

They gain nothing from being an edgelord because (as has been rehashed on these pages ad infinitum) women get points/mates/security just for existing. If you want anyone to notice you as a man, you must stand out from the crowd, and this is the biological basis for male edgelord-ism.

I wish we could just make this a permanent sticky post. This explains the majority of questions that one might have about gender dynamics.

revolution and social upheaval are often worse for women than for men.

...this sounds suspiciously close to "women have always been the primary victims of war".

Was the Bolshevik revolution worse for women or men? I genuinely don't know; I'm asking. I'd be willing to hear arguments for both sides.

It was a more elaborate way of saying “This!”. I wasn’t actually being serious.

I’m all for constant questioning. There comes a point where continued questioning is no longer that useful though, barring a major new discovery. Biologists have better things to spend their time on than questioning evolution; better to just teach it as truth and get on with other things.

If one of the rules of charity is that you’re never allowed to psychoanalyze your opponents, then I suppose it’s uncharitable by definition. But I do think your hypothesis is a reasonable one, and it probably has some truth to it. A lot of us are compulsive contrarians.

As for falsifiability, it’s not really a reasonable criterion to aim for outside of the hard sciences. If we had to restrict ourselves to only discussing what was in practice falsifiable, we would close off vast swaths of human thought.

Per the MBTI, he's on the border between INTJ and INTP

Unsurprising. If only we had more INFJs...

We are kind of a circlejerk but it's not nearly as bad as it could be. Any time one of the main issues comes up - race and immigration, trans, women, AI - there are always people arguing for multiple competing viewpoints. There's never total unanimity.

Every political forum will have a certain slant, it's unavoidable. I don't know of any community that's a true 50/50 split.

Ok, the opinions on the TQ are pretty unanimous. I know we have at least one trans person here who takes up the opposing side, though. I feel like there were at least one or two others who were pro-trans, but maybe I'm misremembering.

I feel like you might be conflating the concepts of "victim" and "non-combatant", treating the two categories as exactly identical. But it's pretty clear to me that there are combatants who are also victims.

The most clear-cut case would be wars where one side is an unjust aggressor and the other side is engaging in self-defense. For the defensive side, I think even voluntary enlistees are victims, despite also being combatants for legal purposes; they're fighting a war of self-preservation that they didn't ask for. In cases where we agree that one party bears the moral blame for the war, it would seem odd to suggest that the other party's combatants are just as culpable as the aggressor's. People have a right to self-defense.

Can you at least agree that the primary victims in the current war in Ukraine are Ukrainian men?

Frankly, I expect the reasoning for “but women have it so easy” is pretty motivated. I don’t think the actual evidence for it is very strong

This will sound totally audacious, but the concept of privilege, and all the work that leftists have done to defend its coherence over the years, is very useful here. It's just female privilege instead of male privilege. Obviously women have problems too, and no one has it so easy that they can just lie there and have things handed to them; but women will still have it easier in many ways as compared to men.

What do you think of this post? (And some of the surrounding ones where people discussed the same issue)

"Women aren't actually bizarre aliens from the planet Zygra'ax with completely inexplicable preferences"

Absolutely, that's what I'm always trying to tell people. Sticky it. Once you understand that sperm is cheap and eggs are expensive then everything else follows in a very natural and rational manner.

Does it have to be anonymous?

A weekly “post ideas that you want to have a deeper discussion about, but haven’t actually gone to the trouble of writing an effortpost for” thread might be a nice idea.

the so-called dissident right's interests are far more aligned with progressives than they are the mainstream right

The dissident right thinks that America should be a 95%+ white country. Progressives plainly do not want that, in any sense. How can you claim that their interests are aligned?

There are a lot of different ways you could look at it, but I think I might just say that the principle of “if you use someone else’s work to build a machine that replaces their job, then you have a responsibility to compensate that person” just seems axiomatic to me. To say that the original writers/artists/etc. are owed nothing, even though the AI literally could not exist without them, is just blatantly unfair.

But Ba'athist Iraq and certainly the Taliban were not leftist powers

They weren't leftist in the strict sense, but they were a brown racialized Other being aggressed upon by white Christian imperialists. The choice was obvious.

Poshlost and what anti-AI artists get right.

Please write this one so I can reply to it.

Traditional (especially Western Christian) morality as incompatible with effective altruism and privileging pet causes and projects as acts of cultivating a personal relationship with the transcendent.

This one is just straightforwardly true (which is why I’m not an effective altruist).

Is it not different from the early factory laborers building the machines that would replace them?

They consented and were paid. It's not analogous at all.

Last week I not so popularly defended copyright, and I still believe it's the best compromise available to us. But it doesn't exist because of a fundamental right

How do you feel about software license agreements? Plenty of software/code is publicly visible on the internet and can be downloaded for free, but it's accompanied by complex licensing terms that state what you can and can't do with it, and you agree to those terms just by downloading the software. Do you think that license agreements are just nonsense? Once something is out on the internet then no one can tell you what you can and can't do with it?

If you think that once a sequence of bits is out there, it's out there, and anyone can do anything with it, then it would follow that you wouldn't see anything wrong with AI training as well.