
I am not interested in debating the object-level truth of this topic. I have engaged in such debates previously, and I found the arguments others put forward unpersuasive (as, I assume, they found mine). I'm not trying to convince @self_made_human that he's wrong about LLMs; that would be a waste of both our time. I was trying to point out to him that however much he thinks he is critical of LLMs (and to his credit he did provide receipts to back it up), that is not how his posts come off to observers (or at least, not to me).

Note that I claimed that the support of experts (Geoffrey Hinton is one of the Nobel Prize winners in question) strengthens my case, not that this, by itself, proves that my claim is true, which would actually be a logical fallacy. I took pains to specify that I'm talking about Bayesian evidence.
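To spell out the "strengthens, not proves" point, here is a minimal formalization (notation introduced here purely for illustration). Expert endorsement E is evidence for hypothesis H whenever experts are more likely to endorse H if it is true than if it is false, and the size of the update is the Bayes factor:

```latex
\frac{P(H \mid E)}{P(\neg H \mid E)}
  = \underbrace{\frac{P(E \mid H)}{P(E \mid \neg H)}}_{\text{Bayes factor}}
    \cdot \frac{P(H)}{P(\neg H)}
```

A Bayes factor greater than 1 shifts the odds toward H without ever proving it, which is exactly the distinction being drawn.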

Appeal to authority is a logical fallacy, one of the classics that humans have noted since antiquity.

Consider that there's a distinction made between legitimate and illegitimate appeals to authority. Only the latter is a "logical fallacy".

Hinton won the Nobel Prize in Physics, but it was awarded for his foundational work on neural networks. I can hardly imagine someone more qualified to count as an expert in the field of AI/ML.

https://en.wikipedia.org/wiki/Argument_from_authority

An argument from authority can be fallacious, particularly when the authority invoked lacks relevant expertise.

This doesn't mean your claims are false, of course, just that the argument you made in your previous post for your claims is weak as a result.

It would be, if it wasn't for the veritable mountain of text I've written to explain myself, or the references I've always cited.

...consistent in claiming that (contra your interlocutors) they can reason, they can perform a variety of tasks well, that hallucinations are not really a problem, etc. Perhaps this is not what you meant, and I'm not trying to misrepresent you so I apologize if so. But it's how your posts on AI come off, at least to me.

When someone writes something like that, I can only assume they haven’t touched an LLM apart from ChatGPT 3.5 back in 2022. Have you not used Gemini 2.5 Pro? o3? Claude 4 Opus?

LLMs aren’t artificial super intelligence, sure. They can’t reason very well, they make strange logic errors and assumptions, they have problems with context length even today.

And yet, this single piece of software can write poems, draw pictures, write computer programs, translate documents, provide advice on countless subjects, understand images, videos and audio, roleplay as any character in any scenario. All of this to a good enough degree that millions of people use them every single day, myself included.

I’ve basically stopped directly using Google search and switched to Gemini as the middleman - the search grounding feature is very good, and you can always check its sources. For programming, hallucination isn’t an issue when you can couple it with a linter or make it see the output of a program and correct itself. I wouldn’t trust it on its own, and you have to know its limitations, but properly supervised, it’s an amazingly capable assistant.
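As a minimal sketch of what that supervised loop looks like in Python -- `llm_complete` is a hypothetical placeholder for whatever model client you actually use, not a real library call:

```python
import subprocess
import sys
import tempfile

def llm_complete(prompt: str) -> str:
    """Hypothetical model call; substitute your actual client here."""
    raise NotImplementedError

def generate_checked_code(task: str, max_rounds: int = 3) -> str:
    """Ask for code, actually run it, and feed any traceback straight back."""
    prompt = f"Write a self-contained Python script that does this:\n{task}"
    for _ in range(max_rounds):
        code = llm_complete(prompt)
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
        result = subprocess.run([sys.executable, f.name],
                                capture_output=True, text=True, timeout=60)
        if result.returncode == 0:
            return code  # hallucinated imports or APIs almost always crash above
        # Show the model its own failure and ask for a corrected version.
        prompt = (f"This script failed.\n--- code ---\n{code}\n"
                  f"--- stderr ---\n{result.stderr}\nReturn a corrected script.")
    raise RuntimeError("No working script after several correction rounds")
```

The loop is the whole trick: generation is unreliable, but mechanical verification (run it, lint it, test it) is cheap and catches most failures.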

Sure, you can craft a convincing technical argument for how they’re just stochastic parrots, or find well-credentialed people saying they just regurgitate their training data and are theoretically incapable of creating any new output. You can pull a Gary Marcus and come up with new gotchas that make the LLMs say blatant nonsense in response to specific prompts. Eppur si muove.

No, and if those posts had been left at +1,0 I would not have said a word.

This is solely about the negative reinforcement on unobjectionable comments that merely have an unpopular opinion. The people who downvote those are doing this forum wrong. I will die on this hill.

I apologize for not responding to the rest of the post, but I wanted to zero in on what seems to be a disagreement of fact rather than a disagreement of opinion.

Ergo, LLMs might be conscious. I also always add the caveat that if they are, they are almost certainly an incredibly alien form of consciousness and likely to have very different qualia.

This would seem to indicate that you already disagree with the illusionists. Illusionists believe that nothing is conscious, and nothing ever will be conscious, because consciousness does not exist. Therefore, you hold a philosophical view (that illusionism is false).

Earlier in the thread you said:

I have a strong conviction that objective morality does not exist.

This is itself a philosophical view. There are philosophers who do believe that objective morality exists. So, it appears that you believe that your own claim is true, and their claims are false.

You previously claimed that Searle's Chinese Room does know how to speak Chinese. So you think Searle's claim that the room doesn't know how to speak Chinese is false. And you think that your own view is true.

In this post you claimed that GPT-4 had a genuine understanding of truth, and that p-zombies are an incoherent concept, both philosophical claims.

So you have a long history of making many philosophical claims. You appear to assert these claims because you believe that they are correct, because they correspond to the facts of reality; so it naturally seems to follow that you think that anyone who denies these claims would be saying something incorrect, and opposed to the facts of reality. I don't see how the concept of a "category error" enters anywhere into it. So "The only way a philosophical conjecture can be incorrect is through logical error in its formulation, or outright self-contradiction" is false. They can be incorrect because they fail to correspond to the facts of reality.

Unless you want to claim "there isn't even such a thing as a philosophical problem, because all of my beliefs are so obviously correct that any reasonable person would have to share all my beliefs, and all the opposing claims are so radically wrong that they're category errors", which is... basically just a particularly aggressive phrasing of the old "all my beliefs are obviously right and all my opponents' beliefs are obviously wrong" thing, although it would still fundamentally be in line with my original point.

The point is that you can't escape from philosophy, you're engaging in it all the time whether you realize it or not (in fact the two of us engaged in a protracted philosophical argument in that final linked post).

Making sure that they're not killing more people than the assisted suicide law allows is actually important; if they have no way to make sure, they shouldn't be doing it at all.

Suppose that you are a Swiss marriage registrar, and that Switzerland does not want to facilitate marriages where one or both partners are coerced into marrying. There are approaches with very different costs for filtering these out. You could just keep a lookout for people who look unhappy or nervous. You could have separate private chats with both the groom and the bride and mention that there are ways out for people who are coerced. You could require both of them to separately talk to a psychologist for an hour. You could require both to undergo psychotherapy for a year. Or you could just declare defeat and refuse to marry anyone, because it is not possible to know for sure what motivations people have.

In reality, you will probably not do that last thing -- even if you are fine with not having marriages, the same argument would extend to employment contracts, loans, purchases, sex, etc. Similarly, few people would argue that because you are quite likely able to smuggle a few grams of cocaine past customs in a truck without detection, we should either abolish customs or stop international trade.

The assisted suicide case here was not even a matter of consent. But I am sure that sooner or later, a case where consent is violated will appear. The chance that the evil family of some rich guy will kidnap his beloved pet and threaten to torture it horribly unless he opts for MAID is low, but not zero.

There is an exchange rate between violating the autonomy of those who really want to live and violating the autonomy of those who really want to die. We probably disagree about the magnitude. From a utilitarian standpoint, I think we should not downplay the suffering of those denied MAID.

Suppose a djinn offered to prolong your life by a decade. If you accept, they will flip a coin. Heads, you get to live in the 98th percentile of happiness. Tails, you get to live in the second percentile of happiness (for your age cohort), with no way out. They also reveal that you will be 70 at the time your extra decade starts.

Personally, my answer would be fuck no. Sure, that decade in the 98th percentile would be sweet -- travelling, having sex with a great partner, enjoying life without being trapped in the rat race, playing with your grandkids. But the horror of the 2nd percentile would be much greater. Your body failing, your mind fogging -- but not to the point where you no longer notice -- without social contacts, getting bedsores in some retirement home, in constant pain, waiting for a death which will not come for a decade.

In reality, we are not subject to the veil of ignorance imposed by the djinn. We can just ask the 70-year-olds what their quality of life is and whether they want to die, and we will mostly get accurate answers. Nobody suggests randomly murdering the elderly in the hope that they might welcome death.

So the next djinn offers their deal, which is the same as before, only you have a way to die before the decade is over -- say by stating your wish to die on seven consecutive days. They warn you that it is possible that someone will pressure you into taking that option even if you are in the happy branch.

This seems like a great deal to me. Sure, I lose some utility in the happy branch, but I also cut the suffering in the pain branch by a factor of roughly 500: seven days of misery instead of about 3,650.

The answer to this is "only take patients from places where they can legally get documents", not "stop asking for documents".

Luckily, this is not how liberal governments deal with foreigners whose governments are uncooperative. If you are a refugee from Iran, and the regime hates you and will not give you any ID documents, then a reasonable country would recognize your plight and try to work around it, not just ship you back to Iran because without ID you cannot stay legally.

The Swiss people (or their representatives) have decided that humans in Switzerland should have a right to assisted suicide. Why should they deny this to foreigners just because their backwards government is uncooperative?

It would be one thing if I was arguing solely from credentials, but as I note, I lack any, and my arguments are largely on perceived merit.

Note that I'm not saying you are arguing from your own credentials. Rather, you are arguing based on the credentials of others, with the statement "In the general 'AI-risk is a serious concern' category, there's everyone from Nobel Prize winners to billionaires". Nobel Prize winners do have credibility (albeit not necessarily outside their domain of expertise), but that isn't a decisive argument, because of the fallacy angle.

Even so, I think that calling it a logical fallacy is incorrect...

This is, to be blunt, quite wrong. Appeal to authority is a logical fallacy, one of the classics that humans have noted since antiquity. Authorities can be wrong, just like anyone else. This doesn't mean your claims are false, of course, just that the argument you made in your previous post for your claims is weak as a result.

What of it? I do, as a matter of fact, know more about LLMs than the average person I'm arguing with.

I simply think it's funny. If it doesn't strike you as humorous that your statement would be agreed upon by all (just with different claims as to who has the bad takes), then we just don't share a similar sense of humor. No big deal.

Do the pro-gun comments in the thread meet your standard?

Like quoting 4chan to say-but-not-say that someone's argument is retarded? +30, -2, btw. (Charitably, they were just quoting it because it's the best explanation they could find, but you can see how that would be massively downvoted if it were an anti-gun rant instead.)

What, you think people don't know when they are being sneered at?

I think the most likely explanation is that our readership is doing opinion war when it comes to an issue they really care about, and that's bad.

I think the most likely explanation is that you're upset that you can't convince anyone at the object level, so you're resorting to shaming over meta-level concerns.

Considering it's Ellis, I wasn't sure if that mattered.

On reflection, probably not.

I cannot recommend The Secret History highly enough, incidentally.

It would be one thing if I was arguing solely from credentials, but as I note, I lack any, and my arguments are largely on perceived merit. Even so, I think that calling it a logical fallacy is incorrect, because at the very least it's Bayesian evidence. If someone shows up and starts claiming that all the actual physicists are ignoring them, well, I know which side is likely correct.

I have certainly, in the past or present, shared detailed arguments.

https://www.themotte.org/post/2368/culture-war-roundup-for-the-week/353975?context=8#context

Think of it as having the world's worst long-term memory. It's a total genius, but you have to re-introduce yourself and explain the whole situation from scratch every single time you talk to it.

https://www.themotte.org/post/2272/is-your-ai-assistant-smarter-than/349731?context=8#context

I've already linked to an explainer above of why it struggles -- the same link covers the arithmetic woes. LLM vision sucks. They weren't designed for that task, and performance on a lot of previously difficult problems, like ARC-AGI, improves dramatically when the information is restructured to better suit their needs.

https://www.themotte.org/post/2254/culture-war-roundup-for-the-week/346098?context=8#context

I've been using LLMs to review my writing for a long time, and I've noticed a consistent problem: most are excessively flattering. You have to mentally adjust their feedback downward unless you're just looking for an ego boost. This sycophancy is particularly severe in GPT models and Gemini 2.5 Pro, while Claude is less effusive (and less verbose) and Kimi K2 seems least prone to this issue.

https://www.themotte.org/post/1754/culture-war-roundup-for-the-week/309571?context=8#context

The good news:

It works.

The bad news:

It doesn't work very well.

Abysmal taste by default, compared to dedicated image models. Base Stable Diffusion 1.0 could do better in terms of aesthetics; Midjourney today has to be reined in from making people perfect.

https://www.themotte.org/post/1741/culture-war-roundup-for-the-week/307961?context=8#context

It isn't perfect, but you're looking at a failure rate of 5-10% as opposed to >80% when using DALLE or Flux. It doesn't beat Midjourney on aesthetics, but we'll get there.

I give up. I have too many comments about LLMs for me to go through them all. But I have, in short, said:

  • LLMs are fallible. They hallucinate.

  • They are sycophantic.

  • They aren't great at poetry (they do fine now, but nothing amazing).

  • Their vision system sucks.

  • Their spatial reasoning can be sketchy.

  • You should always double check anything that is mission critical while using them.

...they can reason, they can perform a variety of tasks well, that hallucinations are not really a problem, etc.

These two statements are not inconsistent. Hallucinations exist, but they can be mitigated. LLMs do perform a whole host of tasks well; otherwise I wouldn't be using them for said tasks. If they're not reasoning while winning the IMO, I have to wonder whether the people claiming otherwise are reasoning themselves.

Note that I usually speak up in favor of LLMs when people make pig-headed claims about their capabilities or lack thereof. I do not see many people claiming that modern LLMs are ASIs or can cure cancer, and if they said such a thing, I'd argue with them too. The asymmetry of misinformation is, as far as I can tell, not my fault.

Somewhat off-topic: the great irony to me of your recent "this place is full of terrible takes about LLMs" arguments (in this thread and others) is that I think almost everyone would agree with it. They just wouldn't agree who, exactly, has the terrible takes. I think that it thus qualifies as a scissor statement, but I'm not sure.

What of it? I do, as a matter of fact, know more about LLMs than the average person I'm arguing with. I do not claim to be an expert, but the more domain expertise my interlocutors have, the more they tend to align with my claims. More importantly, I always have receipts at hand.

I assume PrEP billboards are funded by grants to ‘raise awareness’ and ‘destigmatize’ and have little or nothing to do with the people who view them.

We’ve got a base there, so we’re renting the islands back again from their ‘rightful owners’. Nothing will actually change, Mauritius will just have lots of our tax money now.

Which is exactly the issue: many men do want relationships to form through the same process as friendship. Something organic where both people naturally recognize the value of the other person.

Let me rephrase.

What I learned in that phase is that -- like you say -- attraction is something that you need to cross as the "first hurdle."

But my argument would be that men do the same to women: it's just that men are more visual than women, and it's not at all hard to create a vague spark of attraction in a man. I don't think I'm saying anything you don't already know -- if I read your post right, that's what you're arguing.

That said, I absolutely have had relationships form through the same process as friendship. It's just that the friendship began with us both having at least a mild attraction for the other. The friendship served as a soft courtship. But I absolutely believe that every time this was the case, a relationship could have started much sooner. But I liked how it went down; like you, I take no pleasure in the initial stages of dating.

Sometimes this happened because I was in a relationship at the time, but drew the attention of someone else (this has happened exactly once, let me not exaggerate), sometimes it happened because I wasn't sure of whether I felt like dating, sometimes it happened because I was literally an oblivious idiot and I didn't know what I'd done and I spent 4 months of high school thinking my crush didn't like me when she wanted me to grab her and kiss her.

But, on that note: I also 'won' the attraction by being, in some way, performative and high status.

Birds build nests to attract lady birds (insert LBJ joke here), fish build a wonderful habitat to attract lady fish, peacocks look like a color television advertisement to attract lady peacocks (or just put extended editions of The Office on the platform)... it just is the case that, in most sexually dimorphic species, males attract females by demonstrating high status in some way. I don't have any complaints about the reality of it; it is what it is, and none of woman born controls it or chose it. However people would like it to happen, that's how it happens.

But for me, it absolutely happened organically.

I would argue strongly that I'm less attractive than you -- I don't care if I set my height to 6'7", I wouldn't get the kind of attention you're describing on dating apps. That said, short men have a really rough time, and it sucks that you've struggled because of a baseball statistic. While I have maybe once or twice been asked out by a man, I strongly doubt that gay men would consider me a catch. I can't confirm that -- I'm from the Bible Belt; gay men don't exactly ask out strangers on the street.

But I have a secret weapon.

I love public speaking. I absolutely love it. And when I'm in a meeting, or discussion, about something I find interesting, I can command attention.

Now, be careful what you take from that. I am the world's worst smalltalker. I hate calling people on the phone. I will avoid talking to shopkeepers if I can. I feel anxious just thinking about introducing myself to a new person. Sometimes I'm so lost in thought that I don't hear what people are saying to me, and I'll just respond with whatever I think will move the conversation along. My friends and I once played a party game where we had to imitate a randomly-picked member of our friend group, and someone imitated me by sitting, silently, with his hands clasped in his lap. That's me. When I'm not speaking, you might confuse me for a piece of furniture.

But if you say, "hey, urquan, create a presentation on the economic problems of socialism in the USSR", boy am I already excited. I'm already thinking about all the strange memes and fun analogies I can use to explain Stalin's effort to rapidly industrialize. And I'm thinking about how I might be able to make people chuckle, and remember the presentation despite the dry concept.

When I held an officer position in a club in college, I used that to springboard a few fun lectures on relevant topics I felt like sharing. I don't think most of the other members loved it, but I don't care. I did it for me. I liked it. I was good at it.

And do you know when I met my girlfriend? She came to one of these lectures. She came up afterwards, started talking to me, and wouldn't let me out of her sight until she got my number. This is by far the most interested in me a human being has ever been -- male or female. And her own recollection of the event, she told me later, is, "I saw you, and I knew I had to have you in my life." How's that for crossing the attraction barrier!

I'm not Terence Tao. I'm Rain Man. I have some special abilities that can be quite attractive, to Miss Right, but it's not something I do with intention or structure. It's something that's only mildly under my control. And I have a lot of deficits -- I don't think anyone should be envying my social charm!

There was a motte post a long time ago that replied to people talking about social competition among women; you know, sorority girls, mean girls, female bullying in school, all that kind of stuff. And I loved the comment and have tried to find it many times, without success. It went something like this: "The women I've generally been friends with or dated have been rejects from that culture of competition. And I've seen the scars that competition has made on them."

I thought that was very wise. The women I've dated have universally not been "sorority girl" types. They're not the hot girls out there doing hot girl summer. They've just been average, kind of quirky, intelligent, and warm people. I can't say a bad thing about them. I feel like I found the crown of France in the gutter. "A good wife who can find?"

Can someone explain to me the chain of events that led to the UK paying to get rid of the Chagos Islands like it’s a tree trunk or something? I understand Starmer wants to be rid of them for reasons that are stupid, but why is he paying to do so?

I'm not a fan of the downvote brigade, and I didn't and wouldn't consider any of those downvote worthy, but I don't think they're particularly good comments, either.

(maybe excluding Corvos' last one? It's still an argument-by-definition, but at least it's trying to engage, where aldomilyar's ipse dixiting and wanderer's just kinda making counterfactual claims on pure vibes.)

I mean, you're not alone, but neither are the people who argue against you. That is hardly a compelling argument either way. Pointing to the credentials of those who agree with you is a better argument (though... "being a billionaire" is not a valid credential here), but still not decisive. Appeal to authority is a fallacy for a reason, after all. Moreover, though I'm not well versed in the state of the debate raging across the CS field, so I don't keep tabs on who holds what position, I have no doubt whatsoever that there are equally credentialed people who take the opposite side from you. It is, after all, an ongoing debate and not a settled matter.

Also, frankly I agree with @SkoomaDentist that you are uncritical of LLMs. I've never seen you argue anything except full on hype about their capabilities. Perhaps I've missed something (I'm only human after all, and I don't see every post), but your arguments are very consistent in claiming that (contra your interlocutors) they can reason, they can perform a variety of tasks well, that hallucinations are not really a problem, etc. Perhaps this is not what you meant, and I'm not trying to misrepresent you so I apologize if so. But it's how your posts on AI come off, at least to me.

Somewhat off-topic: the great irony to me of your recent "this place is full of terrible takes about LLMs" arguments (in this thread and others) is that I think almost everyone would agree with it. They just wouldn't agree who, exactly, has the terrible takes. I think that it thus qualifies as a scissor statement, but I'm not sure.

Fair points, but verification is usually way cheaper than generation. If one actual human PhD can monitor a dozen AI agents, it is plausible that the value prop makes sense.

In a lot of tasks, including AI research and coding, you can also automate the verification process. Does the code compile and pass all tests? Does the new proposed optimizer beat Adam or Muon on a toy run?
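A sketch of the cheapest such gate, assuming a conventional pytest-style project layout (illustrative only, not any particular lab's harness):

```python
import subprocess

def passes_cheap_verification(repo_dir: str) -> bool:
    """Reject an AI-proposed change unless it byte-compiles and passes the tests."""
    compile_step = subprocess.run(
        ["python", "-m", "compileall", "-q", repo_dir], capture_output=True)
    if compile_step.returncode != 0:
        return False  # doesn't even compile; no human time wasted
    test_step = subprocess.run(
        ["python", "-m", "pytest", "-q", repo_dir], capture_output=True)
    return test_step.returncode == 0

# The optimizer example has the same shape: run the toy benchmark and compare
# one scalar, e.g. final_loss(proposed) < final_loss(adam_baseline).
```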

There is probably perfectly adequate shareholder value in getting a billion lonely midwits to pay $10/month, rising to $inf/month in the way of all Silicon Valley service models, and keeping them hooked with the LLM equivalent of tokenized language loot boxes. I'd wager it's even the more significant hill to climb for shareholder value.

That might be true today (and tomorrow, or next year), but the companies are betting hard on their models being capable of doing much more, and hence getting paying customers willing to shell out more. The true goal is recursive self-improvement, and the belief that this has far more dollars associated with it than even capturing all the money on earth today. Of course, they need market share and ongoing revenue to justify the investments to get there, which is why you can buy in relatively cheap. Competition also keeps them mostly honest, OAI would probably be charging a great deal more or gatekeeping their best if Google or Anthropic weren't around.

A lot of the heavily downvoted comments in that thread are not rhetorically spicy. Must I? Fine...

I think the most likely explanation is that our readership is doing opinion war when it comes to an issue they really care about, and that's bad. I picture Motte-Jesus storming this temple, flipping tables screaming "Stop turning my Father's house into an echo chamber!"

Anyway, it's all exactly as you describe. Some people do just want to endlessly polish for its own sake. That's what they like to do. And that's ok with me. You get the same thing in STEM too. Mathematicians working on God knows what kinds of theories related to affine abelian varieties over 3-dual functor categories or whatever. None of it will ever be "useful" to anyone. But their work is quite fascinating nonetheless, so I'm happy that they're able to continue on with it in peace.

Isn't it a massive meme (based in fact) that even the most pure and apparently useless theoretical mathematics ends up having practical utility?

Hell, it even has a name: "The Unreasonable Effectiveness of Mathematics in the Natural Sciences"

Just a few examples, since you probably know more than I do:

  • Number theory to modern cryptography

  • Non-Euclidean geometry was considered largely a curiosity till Einstein came along.

  • Group theory and particle physics

So even if the mathematicians themselves want to claim their work is just rolling in Platonic hay for the love of the game, well, I'll smile and wait. It's not like it's expensive either, you can run a maths department on roughly the budget for food, chalk and chalkboards.

(It's amazing how cheap they are, and how more of them don't just run off to a quant firm. Almost makes you believe that they genuinely love maths)

I'm a bit confused here. I believe you've claimed before that a) first-person consciousness does exist, and b) sufficiently advanced AI will be conscious. Correct me if I'm wrong here. You asserted these claims because you think they're true, yes? And so anyone who denies these claims is saying something false?

Have I? I'm pretty sure that's not the case.

The closest I can recall going is:

  • We do not have a complete mechanistic model of consciousness in humans.

  • We do not know what the minimal requirements of consciousness even are in the first place.

  • I have no robust way of knowing if other humans are conscious. I'm not an actual solipsist, because I think the odds are pretty damn solid (human brains are really similar), but it is not actually a certainty.

  • Ergo, LLMs might be conscious. I also always add the caveat that if they are, they are almost certainly an incredibly alien form of consciousness and likely to have very different qualia.

In a sense, I see the question of consciousness as irrelevant when it comes to AI. I really don't care! If an ASI tells me it's conscious, then I'll just shrug and go about my day. What I care far more about is what an ASI can achieve.

(If GPT-5 tells me it's conscious, I'd say, great, now where is that chart I asked for?)

In the early 20th century you had New Criticism, and people criticized that for being overly formalist and ignoring social and political context, so then you had everything that goes under the banner of "postmodernism", ideology critique, historicism, all that sort of stuff, and then you had some people who said that the postmodernist stuff was leading us astray and we had gotten too far from the texts themselves and how they're actually received, so they got into "postcritique" and reader response theory, and on and on it goes...

It looks to me less like a crisis and more like business as usual. What I see is a series of cyclical fads going in and out of fashion, with no real consistency or convergence.

How many layers of rebuttal and counter-rebuttal must we go before a lasting consensus is achieved? I expect most literary academics would say that the self-licking nature of the ice cream cone is the point.

Contrast with STEM: if someone proved that the axiom of choice was, strictly speaking, unnecessary, that would cause a revolution. And even when such fundamental changes don't happen, the field still makes steady progress.

I'd be happy if you could direct me to any of these novel and esoteric readings.

Uh... This really isn't my strong suit, but I believe the queer-theoretical interpretation of Moby-Dick or the post-colonial reading of The Tempest might apply.

I do not think Shakespeare intended to say much on the topic of colonial politics. I can grant that sailors are hella gay, so maybe the critical queers have something useful to say.

Well, that's something that psychoanalysis actually does take a theoretical stance on. You can't trust the patient about what the problem is. Frequently, what they first complain about is not the root cause of what's actually going on. It might be. But frequently it's not. Any "shared understanding" after a one week period of consultation is illusory, because people fundamentally do not understand themselves.

I don't think you really need psychoanalysis to get there. Depressed people are often known to not acknowledge their depression. I've never felt tempted to bring out a psychoanalysis textbook to solve such problems, I study them because I'm forced to, for exams set by sadists.

Did you read Less Than Zero before Imperial Bedrooms?

I did not. Considering it's Ellis, I wasn't sure if that mattered. I'll probably re-read IB after I read Less Than Zero.

I finished Lunar Park and thought it was generally good.

The Rules of Attraction, incidentally, is set at the same college as Donna Tartt's The Secret History (much-beloved in these parts).

A bit of trivia I knew for some reason. I haven't read The Secret History (I probably should), but I've read a plot synopsis, and was amused that the two books sound like they could be taking place in the same fictional universe.

You’re right, I wasn’t really thinking about extracting max value from limited compute.

The same rhetorical flourishes that would go overlooked on posts in favour of the prevailing view? I don't buy it.

They'd likely be downvoted, just by different people.

A downvote is not a bullet. It's more like a middle finger, or a scowl, or an eye-roll, but that's enough. It's enough to say "we don't want you here. go away", and that's my point. It's against the spirit of this forum. It is politics and tribalism above the pursuit of truth.

All I'm seeing is crying about rhetorically dishing it out but not being willing to take even the most minor pushback.

Well, the Arab world maintains its own massive charity-lobbying-propaganda industrial complex that works to keep that alive, but I think the historical outline I have is the reason that sympathy is around to exploit.

The total salary of all therapists is surely far higher than the combined salary of all nuclear engineers?

Almost certainly true, and my analogy is imperfect.

In the limit, AI are postulated to be capable of doing {everything humans can do}, physically or mentally.

But AI companies today are heavily compute-constrained; they're begging Nvidia to sell them more GPUs, even at ridiculous prices.

That means they want to extract maximum $/flop, so they'd much rather automate high-value knowledge work first. AI researchers make hundreds of thousands or even millions of USD a year, with reported packages running far higher; if you have a model that is as smart as an AI researcher, then you can capture some of that revenue.

Once those extremely high-yield targets are out of the way, then you can start creeping down the value chain. The cost of electricity for ChatGPT is less than the hourly fee most therapists charge.
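A back-of-envelope comparison, with every number assumed purely for illustration (none of these are measured figures):

```python
# Illustrative assumptions only -- not real prices or usage data.
usd_per_million_tokens = 10.0    # assumed blended inference price
tokens_per_chat_hour   = 20_000  # assumed volume for an hour of conversation
therapist_fee_per_hour = 150.0   # assumed typical fee

llm_cost = usd_per_million_tokens * tokens_per_chat_hour / 1_000_000
print(f"LLM: ~${llm_cost:.2f}/hour vs therapist: ${therapist_fee_per_hour:.0f}/hour")
# -> LLM: ~$0.20/hour vs therapist: $150/hour, a gap of roughly 750x
```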

Of course, I must hasten to add that this is an idealized scenario. The models aren't good enough to outright replace the best AI researchers, maybe not even the median or subpar ones. If the only job they can do is one that demands the intelligence of a therapist, then they'll have to settle for that.

(And of course, there's the specter of recursive self-improvement. Each AI that works as an AI researcher or programmer can plausibly shorten the iteration time to an even better researcher or coder. This may or may not be happening today.)

In other words, there are competing pressures:

  • Revenue and market share today. Hence free or $20 plans for the masses.

  • A push to sell more expensive plans or access to better models to those willing to pay for them.

  • Severe compute constraints, meaning that optimizing revenue on the margin is important.