Suicide is bad. Other people's suffering is part of why it's bad. The loss of a person, with all their potential, is another part.
For any form of suffering, you can find people who bore it bravely and often even cheerfully. Conversely, there is no life that is good enough, loved enough, respected enough to stop some people from killing themselves.
The difference between suicidal people and everyone else is not that they suffer more and need to kill themselves to make it stop; it's that they have a brain and a disposition that makes them want to kill themselves. Often this comes in very short bouts, as I say. Often it's fixable. I have known somebody whose father killed himself, who inherited his father's condition and tried to kill himself as well, thankfully failed, and is now living a reasonably happy life 99% of the time. He has bad days and needs to be kept from harm on those days.
There is no need to agree with an irrational person about the nature of their condition, nor the solution thereof.
As a general rule, we should accept authenticity over bullshit. No machine can love a human in the same way a human can, and dying alone is superior to a false fantasy.
I just flat out disagree, sorry. Many hugely important things in our lives are fictional; I've spent more of my life with fictional people than real ones if you judge hour-by-hour, and I'm no hikikomori. Pastiche architecture, veneer furniture, boy's-own adventure stories: I'll take an artful illusion over brutal authenticity any day of the week.
I don't think there is a great way to guarantee that only relationship-challenged individuals get their hands on it. People are probably gonna try and get their hands on an android partner by either purchasing used or gaming the system. The drawbacks outweigh the positives.
I'm open to discussing this, but I think your angle is wrong. Firstly because the happy individuals mostly don't need to bother with it, and secondly because interacting with a patient simulacrum seems to me to be a very good way for people who are bad with people to become at least a little bit better with people.
To make it clear where I stand, I was being serious earlier when I said I regard this technology as downright miraculous and I use it regularly myself, though for fiction writing and occasional venting rather than a romantic relationship. I am really, really upset that an increasing number of people want to ban it in the name of forcing me and others to try and fail to live their fantasy of a happy life. To me your proposition is very redolent of the socialist logic of, "if we ban all the good schools, people will have no choice but to make the bad schools better!". No. Life just becomes a little more shit for everyone.
Let's tilt the scale by all means, let's help people form relationships and not get addicted, I'm doing that for myself as we speak. And I'm doing it partly with the help of an AI assistant I constructed. The two can go together perfectly well.
If someone is suicidal it's not because you failed them, it's not because society failed them, it's because something in their brain is making them want to kill themselves. The vast majority of suicidal bouts last less than two minutes, which is why very simple interventions like locking rooftops and withdrawing gas ovens are usually enough. The suicidal people I have known were good, decent people who regretted both their own suffering and the suffering they couldn't help inflicting on their loved ones.
You failed them so you deserve this.
If you were just nicer to me, I wouldn't be like this.
This is the logic of narcissists and abusers.
There is no social positive for computers and humans to emotionally intermingle in this way.
Sure there is. Lots of people are too dysfunctional to have a happy relationship. Take the happiest, easiest people in the world and pair them up. Do the same to the next pair, and the next, and the next. At some point you are either going to have couples where one makes the other miserable, or they both make each other miserable. It's simply not true that there is an (implicitly happy) relationship out there for everybody, either romantic or otherwise. And a mildly positive emotional relationship with a very laid back computer is far kinder than what we have traditionally done with such people, which is to look away and wait for them to die.
I get what you're saying, obviously, but you are comparing the imperfect reality (sometimes a machine is better than nothing) with an IMO overly-positive could-be (everyone or nearly everyone starts interacting more in person and becomes less lonely and gets into a happy, healthy relationship). A change in ideology can move the tipping-point of misery but only so far. As far as I'm concerned, the ability to mass-manufacture companionship and something as close to genuine care as makes no odds is genuinely miraculous, and makes me more optimistic about tech than I've been for a long time.
True. I’m just tickled by the idea.
I would call it somewhat deceptive:
- The continued existence of Wikipedia is awesome
- our existence provides great value to you
- most of our readers don’t donate, so your gift matters
Implicitly the last point relates to the first two, i.e. your donations allow Wikipedia to continue to exist.
If they had the first paragraph be an introduction to their grants program, that would be different, but I would call their current banner less than maximally honest. Obviously legally it’s safe.
You can bet there's a lot of "We need to DO something" combined with ample hemming and hawing.
Oh my God. Every committee I've ever been in has been run by the CIA! How could I not see it?!
FWIW I had it growing up and loved it. You aren't 12 and the material may be less new to you, but I remember it being well done.
you're not in trouble, but I saw your flair...
I don't know, maybe those landed gentry complaining about the venal upstart merchants were on to something (they weren't).
Why make the point and then immediately deny it, beyond reflexive ideological distaste?
That was indeed the complaint of the landed gentry. People who just made lots of money are not necessarily good caretakers - they are often acquisitive and grasping, they tend to be gamblers whose individual endeavors are disposable, they're not trained to be leaders, and they often don't regard themselves as having obligations to society because they transcended society. The landed gentry had serious, solid holdings that couldn't be moved or got rid of, a clear and specific personal relationship with the people of a certain area (the kind of relationship that MPs / senators (?) are meant to have and don't), and were self-consciously trained for virtue even if it didn't always take.
I do note that EA and various other ‘bleeding-heart’ movements also tend to be disproportionately Jewish.
One might fairly argue that almost every movement is disproportionately Jewish, due to high IQ and great verbal skills, but there’s something there IMO.
Ah, no, sorry, it calls me an ethno-nationalist. The quotes were mine, you’re just the closest person in broad profiling terms whose name it knows.
It thinks you’re a nostalgist with a “somewhat rosy view of pre-war gentleman-scholar society” or words to that effect :P
You can take the AI out of San Francisco, but…
What I think you’re overlooking is that the model (if you believe the chain of thought) is not doing stylometric analysis for the most part, it’s doing profiling.
Broadly:
British-presenting / ethno-nationalist / writes at length in a cultured register / HBD believer / argues for the ideal of the gentleman-scholar and a leisured aristocracy / argues from utilitarian logic therefore likely rat-adjacent + some other stuff -> 2rafa.
Assuming you don’t put these convictions in your hypothetical internal research reports, I would expect it to be orders of magnitude harder to identify you.
But then I think it would have known about me, and it clearly didn’t. Even when I confirmed the quote was from a motte poster, and asked for a big longlist of candidates, my handle never came up at all. Instead it started listing notables like Zorba even when they couldn’t possibly fit the profile. Very clearly it doesn’t know mid-tier posters like me. May not stay that way of course.
But ‘identifies 2rafa’s writing as 2rafa’s writing’ is very different from reading your comments on Instagram or company PR releases and linking you to known dissident 2rafa. I’m not saying it’s impossible, but doxxing someone is an order of magnitude harder as long as you don’t have a very particular stylistic tell.
It doesn't know I exist. Specifically, it seems to mostly draw from the Reddit period and AAQCs. Obscurity is your friend, people!
Honestly? My Motte-specific knowledge is thin and patchy, and I should be upfront about that rather than pretend to a corpus I don't have. What I think I actually have:
- AAQCs (Quality Contributions roundups) are the most likely source: they get reposted, linked, quoted on Twitter, and occasionally end up excerpted on other blogs or in ACX link roundups. That's the kind of content that gets scraped into training data multiple times via multiple routes, which is how specific passages stick.
- Culture War Thread weekly roundup posts from the /r/TheMotte era, when it was still on Reddit, are more likely to have been indexed than anything after the move to themotte.org. Reddit is heavily represented in training data; standalone forums much less so.
- Meta-discussion about the Motte on SSC/ACX, LessWrong, Twitter, and rationalist-adjacent blogs, so I know about certain handles (2rafa, Hlynka, Trace, KulakRevolt before he left, Darwin before he was banned, etc.) more than I know their actual posts. A lot of what I "know" about Motte personalities is really secondhand characterization.
- Full threads, not really. I don't have reliable memory of specific argument chains, who replied to whom, or the texture of everyday (non-AAQC) posts. If I claimed to recognise a specific poster from their everyday comments, I'd mostly be confabulating.
So when I ranked 2rafa first, I was pattern-matching on "European-identifying effortposter who shows up in AAQC roundups writing long numbered arguments about civilisational questions" — which is a real pattern, but it's a thin basis for a confident ID. Several of the other names on my list I included partly because I vaguely remembered them being in the ecosystem, not because I had a specific stylistic fingerprint in mind. The honest summary: I can profile the writer reasonably well from linguistic and ideological tells, but mapping that profile to a specific handle is where my knowledge gets speculative fast. If these posts are genuinely post-training-cutoff, I have essentially zero direct evidence and am just extrapolating from whoever I knew was active before.
That is if you trust its own self-reporting of course. And more parameters means more detailed recall. Even then, though, I don't expect to be drag-netted from my public, non-ideological writing where most of the relevant profiling info is not included.
In the meantime @2rafa, please enjoy answering for my sins :-D
Nice! And kudos for going off and doing a proper test.
My instinct is that even with this type of training, LLMs will still be missing something essential
Your instinct is probably correct IMO. This form of synthetic data generation is just another tool in the box, it's not the key to everything.
I will say that we've got far further than I ever expected us to get using these methods. I'm instinctively a Gary Marcus-style fan of embodiment and unsupervised learning, it seemed clear to me pre-LLM that models wouldn't be able to be anything resembling intelligent without a body and the ability to interact with the real world and 'test' their understanding in real time. When LLMs came in, I felt I had to admit that I'd been wrong. It seems clear to me that we have managed to get to something I would call 'intelligence' (even if it's spiky and fails in some cases where humans would not fail) through these means. So I no longer trust my instincts as much.
This kind of semi-supervised exploration seems like a good compromise for now. I am also very interested in LLMs that can combine next-token video generation and text generation, because video generation requires understanding a bunch of stuff about the real world in order to produce consistent results, but that's a way off.
In this toy case it's just literally a calculator (a snippet of python code). The problem is 2+2, the calculator just does 2+2 and checks if the answer is the same as the LLM output. (The LLM is trained to format the final answer in a particular manner and wrap it with special tokens, so the verifier doesn't have to be able to interpret natural language.)
You can get surprisingly far with this. If it's a calculus question, you can use an automatic differentiator to check it. Likewise for factorisation questions, metric conversion questions, algebraic manipulation of formulae, etc. You put a little work into programming the automatic verifier and you can get an infinite number of problems.
If you're a big company, you might have human domain experts doing some of this work too. If you're a smaller company you have a big LLM do verification for the smaller ones.
Then you have leetcode and programming problems, and again you can verify these automatically. Does the program compile? Is the program output what was requested? Is it faster than the previous solution?
Like I said, this only works for maths, programming, and other domains where you can verify the answer with a computer relatively cheaply, but contra the model of multiple intelligence factors, heavy training on maths and programming seems to improve general intelligence and reasoning quite well.
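To make the verifier idea above concrete, here's a minimal sketch in Python. The `<answer>` token format is illustrative only (no real model's special tokens are implied); the point is that the model wraps its final answer, so the verifier never parses natural language, it just computes the ground truth itself and compares.

```python
import re

# Hypothetical wrapper tokens; real models use their own special tokens.
ANSWER_RE = re.compile(r"<answer>(.*?)</answer>", re.DOTALL)

def verify_arithmetic(problem: str, model_output: str) -> bool:
    """Check a model's answer to a toy arithmetic problem.

    `problem` is a plain expression like "2 + 2"; the ground truth is
    computed directly, so no reference answer needs to be stored.
    """
    match = ANSWER_RE.search(model_output)
    if match is None:
        return False  # malformed output counts as wrong
    try:
        claimed = float(match.group(1).strip())
    except ValueError:
        return False
    # eval with no builtins is acceptable here because we generate the
    # problems ourselves; a production verifier would use a real parser.
    truth = eval(problem, {"__builtins__": {}})
    return claimed == truth

print(verify_arithmetic("2 + 2", "Hmm, 2 and another 2. <answer>22</answer>"))
print(verify_arithmetic("2 + 2", "The sum of 2 and 2. <answer>4</answer>"))
```

The same shape extends to the other domains mentioned: swap the arithmetic check for an automatic differentiator, a unit converter, or a compile-and-run harness, and the rest of the loop is unchanged.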
Mid 30s, and I drink rarely because even one pint makes me woozy for a few hours and that's not fun unless I'm with friends. Drank 3 pints one evening last week, had very restless sleep and was hungover and unable to work until about 3pm. That's a bit extreme for me but it's just not something I can do any more.
In general I think it has less to do with age and more to do with drinking frequency, which correlates with age for various reasons. My father is like @MaximumCuddles and has more every single day than I would in a month. He doesn't sleep well but otherwise shows no ill effects.
Question: What is 2 + 2
Model: Hmm, that’s 2 and then another 2, so 22.
AUTOMATIC VERIFIER: WRONG
——
Model: Hmm, that’s the sum of 2 and 2, so 4
AUTOMATIC VERIFIER: CORRECT.
The model is tweaked slightly to make the second output more likely, and that output is potentially added to the training set. Repeat for arbitrarily complex mathematics and other problems as long as the solution can be verified, even if it isn’t known in advance. In this way you can generate potentially infinite amounts of data, albeit limited to certain domains. However, problem solving ability has so far extended quite well to other domains even when trained in this manner.
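The loop described above can be sketched as follows. `sample_from_model` and the "tweak the model" step are placeholders (a real system would do gradient updates on the verified outputs); the sketch only shows the sample-verify-keep structure.

```python
import random

def sample_from_model(problem: str) -> str:
    # Placeholder for a real model's sampling step; here it just
    # guesses between the two answers from the dialogue above.
    return random.choice(["22", "4"])

def verify(problem: str, answer: str) -> bool:
    # Same idea as the automatic verifier: compute the truth directly.
    return answer == str(eval(problem, {"__builtins__": {}}))

def training_step(problem: str, num_samples: int = 8) -> list[str]:
    """Sample several candidate solutions and keep the verified ones.

    The kept outputs are what would be reinforced / added to the
    training set; incorrect samples are simply discarded.
    """
    kept = []
    for _ in range(num_samples):
        answer = sample_from_model(problem)
        if verify(problem, answer):
            kept.append(answer)
    return kept

# Every kept sample is verified-correct, so the synthetic data is clean
# by construction, which is what makes the infinite-data claim work.
kept = training_step("2 + 2", num_samples=50)
```

Because the problems are generated and checked programmatically, the bottleneck is writing verifiers for new domains, not labelling data, which is exactly the limitation the comment notes.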
Has faced the judgement of ages and come out on top.
I read your sentence as
It also should be noted that Ukraine has a way better military than anyone gave them credit for [in spite of the rubbish inheritance they got from the Soviets].
Whereas Ditto is saying
[Ukraine has a way better military than anyone gives them credit for, because the Soviets gave them a good military]
It was a bad idea that has predictably gone tits-up. The possibility of saying so is why people want to be consulted.

Have lived in apartments all my life, in the UK and Japan. With one exception, have never been troubled by noise from neighbours.