
Anyway, it's all exactly as you describe. Some people do just want to endlessly polish for its own sake. That's what they like to do. And that's ok with me. You get the same thing in STEM too. Mathematicians working on God knows what kinds of theories related to affine abelian varieties over 3-dual functor categories or whatever. None of it will ever be "useful" to anyone. But their work is quite fascinating nonetheless, so I'm happy that they're able to continue on with it in peace.

Isn't it a massive meme (based in fact) that even the most pure and apparently useless theoretical mathematics ends up having practical utility?

Hell, it even has a name: "The Unreasonable Effectiveness of Mathematics in the Natural Sciences"

Just a few examples, since you probably know more than I do:

  • Number theory to modern cryptography

  • Non-Euclidean geometry was considered largely a curiosity till Einstein came along.

  • Group theory and particle physics
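The number-theory-to-cryptography link can be made concrete with a toy RSA sketch (illustrative only: the primes here are tiny, and real RSA uses ~2048-bit primes plus padding schemes):

```python
# Toy RSA: pure number theory (primes, modular arithmetic, Euler's totient)
# turned into a working encryption scheme.
p, q = 61, 53            # two secret primes
n = p * q                # public modulus
phi = (p - 1) * (q - 1)  # Euler's totient of n
e = 17                   # public exponent, coprime to phi
d = pow(e, -1, phi)      # private exponent: modular inverse of e (Python 3.8+)

message = 42
cipher = pow(message, e, n)  # encrypt: m^e mod n
plain = pow(cipher, d, n)    # decrypt: c^d mod n
assert plain == message
```

Fermat and Euler were proving these theorems centuries before anyone had a use for them; now every HTTPS connection leans on this machinery.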

So even if the mathematicians themselves want to claim their work is just rolling in Platonic hay for the love of the game, well, I'll smile and wait. It's not like it's expensive either, you can run a maths department on roughly the budget for food, chalk and chalkboards.

(It's amazing how cheap they are, and how more of them don't just run off to a quant firm. Almost makes you believe that they genuinely love maths)

I'm a bit confused here. I believe you've claimed before that a) first-person consciousness does exist, and b) sufficiently advanced AI will be conscious. Correct me if I'm wrong here. You asserted these claims because you think they're true, yes? And so anyone who denies these claims is saying something false?

Have I? I'm pretty sure that's not the case.

The closest I can recall going is:

  • We do not have a complete mechanistic model of consciousness in humans

  • We do not know what the minimal requirements of consciousness even are in the first place

  • I have no robust way of knowing if other humans are conscious. I'm not an actual solipsist, because I think the odds are pretty damn solid (human brains are really similar), but it is not actually a certainty.

  • Ergo, LLMs might be conscious. I also always add the caveat that if they are, they are almost certainly an incredibly alien form of consciousness and likely to have very different qualia.

In a sense, I see the question of consciousness as irrelevant when it comes to AI. I really don't care! If an ASI tells me it's conscious, then I'll just shrug and go about my day. What I care far more about is what an ASI can achieve.

(If GPT-5 tells me it's conscious, I'd say, great, now where is that chart I asked for?)

In the early 20th century you had New Criticism, and people criticized that for being overly formalist and ignoring social and political context, so then you had everything that goes under the banner of "postmodernism", ideology critique, historicism, all that sort of stuff, and then you had some people who said that the postmodernist stuff was leading us astray and we had gotten too far from the texts themselves and how they're actually received, so they got into "postcritique" and reader response theory, and on and on it goes...

It looks to me less like a crisis and more like business as usual. What I see is a series of cyclical fads going in and out of fashion, with no real consistency or convergence.

How many layers of rebuttal and counter-rebuttal must we go through before a lasting consensus is achieved? I expect most literary academics would say that the self-licking nature of the ice cream cone is the point.

Contrast with STEM: If someone proves that the axiom of choice is, strictly speaking, unnecessary, that would cause a revolution. Even if such a fundamental change doesn't happen, the field will make steady improvements.

I'd be happy if you could direct me to any of these novel and esoteric readings.

Uh... This really isn't my strong suit, but I believe that the queer-theoretical interpretation of Moby-Dick or the post-colonial reading of The Tempest might apply.

I do not think Shakespeare intended to say much on the topic of colonial politics. I can grant that sailors are hella gay, so maybe the critical queers have something useful to say.

Well, that's something that psychoanalysis actually does take a theoretical stance on. You can't trust the patient about what the problem is. Frequently, what they first complain about is not the root cause of what's actually going on. It might be. But frequently it's not. Any "shared understanding" after a one week period of consultation is illusory, because people fundamentally do not understand themselves.

I don't think you really need psychoanalysis to get there. Depressed people are often known to not acknowledge their depression. I've never felt tempted to bring out a psychoanalysis textbook to solve such problems, I study them because I'm forced to, for exams set by sadists.

Did you read Less Than Zero before Imperial Bedrooms?

I did not. Considering it's Ellis, I wasn't sure if that mattered. I'll probably re-read IB after I read Less Than Zero.

I finished Lunar Park and thought it was generally good.

The Rules of Attraction, incidentally, is set at the same college as Donna Tartt's The Secret History (much-beloved in these parts).

A bit of trivia I knew for some reason. I haven't read The Secret History (I probably should), but I've read a plot synopsis, and was amused that the two books sound like they could be taking place in the same fictional universe.

You’re right, I wasn’t really thinking about extracting max value from limited compute.

The same rhetorical flourishes that would go overlooked on posts in favour of the prevailing view? I don't buy it.

They'd likely be downvoted, just by different people.

A downvote is not a bullet. It's more like a middle finger, or a scowl, or an eye-roll, but that's enough. It's enough to say "we don't want you here. go away", and that's my point. It's against the spirit of this forum. It is politics and tribalism above the pursuit of truth.

All I'm seeing is crying about rhetorically dishing it out but not being willing to take even the most minor pushback.

Well the Arab world maintains its own massive charity-lobbying-propaganda industrial complex that works to keep that alive, but I think the historical outline I have is the reason that sympathy is around to exploit.

The total salary of all therapists is surely far higher than the combined salary of all nuclear engineers?

Almost certainly true, and my analogy is imperfect.

In the limit, AI are postulated to be capable of doing {everything humans can do}, physically or mentally.

But AI companies today are heavily compute-constrained; they're begging Nvidia to sell them more GPUs, even at ridiculous costs.

That means that they want to extract maximum $/flop. This means that they'd much rather automate high-value knowledge work first. AI researchers make hundreds of thousands or even millions of USD a year; if you have a model that is as smart as an AI researcher, then you can capture some of that revenue.

Once those extremely high-yield targets are out of the way, then you can start creeping down the value chain. The cost of electricity for ChatGPT is less than the hourly fee most therapists charge.
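The $/flop argument above can be sketched as back-of-envelope arithmetic (every number here is an illustrative assumption I'm making up for the example, not a real figure):

```python
# Back-of-envelope: margin per GPU-hour if one GPU-hour of inference
# substitutes for one hour of human work. All numbers are assumptions.
gpu_hour_cost = 3.0           # assumed all-in cost to run one GPU for an hour, USD

# Assumed hourly value of the work being displaced:
ai_researcher_hourly = 500.0  # assumed loaded hourly cost of a top researcher
therapist_hourly = 150.0      # assumed typical session fee

for label, hourly in [("AI researcher", ai_researcher_hourly),
                      ("therapist", therapist_hourly)]:
    margin = hourly - gpu_hour_cost
    print(f"{label}: ~${margin:.0f} margin per GPU-hour")
```

Under any remotely similar assumptions, the compute-constrained move is to chase the high-value work first and creep down the value chain only once those targets are saturated.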

Of course, I must hasten to add that this is an ideal scenario. The models aren't good enough to outright replace the best AI researchers, maybe not even the median or subpar ones. If the only job they can do is one that demands the intelligence of a therapist, then they'll have to settle for that.

(And of course, there's the specter of recursive self-improvement. Each automated AI researcher or programmer can plausibly speed up the iteration time toward an even better researcher or coder. This may or may not be happening today.)

In other words, there are competing pressures:

  • Revenue and market share today. Hence free or $20 plans for the masses.

  • A push to sell more expensive plans or access to better models to those willing to pay for them.

  • Severe compute constraints, meaning that optimizing revenue on the margin is important.


...And for the lovely anecdote I mentioned, from Nancy McWilliams's Psychoanalytic Diagnosis:

Thirty-five years ago I treated a man for severe obsessions and compulsions. Today I might send him for concurrent exposure therapy and possibly medication; at the time, those treatments had not been developed. He was an engineering student from India, lost and homesick in an alien environment. In India, deference to authority is a powerfully reinforced norm, and in engineering, compulsivity is adaptive and rewarded. But even by the standards of these comparatively obsessive and compulsive reference groups his ruminations and rituals were excessive, and he wanted me to tell him definitively how to stop them. When I reframed the task as understanding the feelings behind his preoccupations, he was visibly dismayed. I suggested that he might be disappointed that my way of formulating the problem did not permit a quick, authoritative solution. "Oh, no!" he insisted; he was sure I knew best, and he had only positive reactions to me.

The following week he came in asking how "scientific" the discipline of psychotherapy is. "Is it like physics or chemistry, an exact science?" he wanted to know. No, I replied, it is not so exact and has many aspects of an art. "I see," he pondered, frowning. I then asked if it troubled him that there is not more scientific accuracy in my field. "Oh, no!" he insisted, absentmindedly straightening up the papers on the end of my desk. Did the disorder in my office bother him? "Oh, no!" In fact, he added, it is probably evidence that I have a creative mind. He spent our third session educating me about how different things are in India, and wondering abstractedly about how a psychiatrist from his country might work with him. Did he sometimes wish I knew more about his culture, or that he could see an Indian therapist? "Oh, no!" He is very satisfied with me.

His was, by clinic policy, an eight-session treatment. By our last meeting, I had succeeded, mostly by gentle teasing, in getting him to admit to being occasionally a little irritated with me and with therapy (not angry, not even aggravated, just slightly bothered, he carefully noted). I thought that the treatment had been largely a failure, though I had not expected to accomplish much in eight meetings. But 2 years later he came back to tell me that he had thought a lot about feelings since he had seen me, particularly about his anger and sadness at being so far from his native country. As he had let in those emotions, his obsession and compulsions had waned. In a manner typical of people in this clinical group, he had found a way to feel that he was in control of pursuing insights that came up in therapy, and this subjective autonomy was supporting his self-esteem.

Countertransference with obsessional clients often includes an annoyed impatience, with wishes to shake them, to get them to be open about ordinary feelings, to give them a verbal enema or insist that they "shit or get off the pot." Their combination of excessive conscious submission and powerful unconscious defiance can be maddening. Therapists who have no personal inclination to regard affect as evidence of weakness or lack of discipline are mystified by the obsessional person's shame about most emotions and resistance to admitting them. ...

You accuse me of engaging in philosophy, and I can only plead guilty. But I suspect we are talking about two different things. I see a distinction between what we might call instrumental versus terminal philosophy. I use philosophy as a spade, a tool to dig into reality-anchored problems like the nature of consciousness or my ethical obligations to a patient. The goal is to get somewhere. For many professional philosophers I have encountered, philosophy is not a tool to be used but an object to be endlessly polished. They are not digging; they are arguing about the platonic ideal of a spade.

Dear Lord what a beautiful illustration of Jung's dichotomy between extroverted thinking and introverted thinking. Textbook. I'm practically giddy over here.

Anyway, it's all exactly as you describe. Some people do just want to endlessly polish for its own sake. That's what they like to do. And that's ok with me. You get the same thing in STEM too. Mathematicians working on God knows what kinds of theories related to affine abelian varieties over 3-dual functor categories or whatever. None of it will ever be "useful" to anyone. But their work is quite fascinating nonetheless, so I'm happy that they're able to continue on with it in peace.

I must strongly disagree, this doesn't represent my stance at all. In fact, I would say that this is a category error. The only way a philosophical conjecture can be "incorrect" is through logical error in its formulation, or outright self-contradiction.

I'm a bit confused here. I believe you've claimed before that a) first-person consciousness does exist, and b) sufficiently advanced AI will be conscious. Correct me if I'm wrong here. You asserted these claims because you think they're true, yes? And so anyone who denies these claims is saying something false?

These claims (that first-person consciousness does exist, and that sufficiently advanced AI will be conscious) are philosophical claims. There are philosophers who deny one or both of them. Presumably you don't think they're making a "category error", you just think they're saying something false.

For every scholar doing a careful statistical analysis, how many are writing another unfalsifiable post-structuralist critique by doing the equivalent of scrutinizing a takeout menu?

Of course, there's a lot of indefensible crap out there. But 90% of everything is crap. I simply defend the parts that are defensible and ignore the parts that are indefensible.

It is designed to accumulate "perspectives," not to converge on truth.

That's a relatively accurate statement!

Some people just want to get things done. Some people just want to sit back and take a new perspective on things. Nature produces both types with regularity. Let us appreciate the beautiful diversity of types among the human race, yes?

I do not see an equivalent "interpretive crisis" in literary studies.

That's because you haven't been looking. There's basically never not an interpretive crisis going on in literary studies.

In the early 20th century you had New Criticism, and people criticized that for being overly formalist and ignoring social and political context, so then you had everything that goes under the banner of "postmodernism", ideology critique, historicism, all that sort of stuff, and then you had some people who said that the postmodernist stuff was leading us astray and we had gotten too far from the texts themselves and how they're actually received, so they got into "postcritique" and reader response theory, and on and on it goes...

In general, people outside of the humanities underestimate the degree of internal philosophical disagreement within the humanities. Here's an hour long podcast of Walter Benn Michaels talking about the controversy engendered by his infamous paper "Against Theory", if you're interested.

The incentive is to produce a novel interpretation, the more contrarian the better. This creates a centrifugal force, pushing the field away from stable consensus and towards ever more esoteric readings.

I'd be happy if you could direct me to any of these novel and esoteric readings. My impression is that the direction of force is the opposite, and that readings tend to be conservative because agreeing with your peers and mentors is how you get promoted (conservative in the sense of adhering to institutional trends, not conservative in the political sense).

In most cases, patients come to us because they believe they have a problem. We usually agree. That shared understanding of a problem in need of a solution is anchor enough.

Well, that's something that psychoanalysis actually does take a theoretical stance on. You can't trust the patient about what the problem is. Frequently, what they first complain about is not the root cause of what's actually going on. It might be. But frequently it's not. Any "shared understanding" after a one week period of consultation is illusory, because people fundamentally do not understand themselves. (I will relay a lovely anecdote about such a case in a reply to this comment, so as not to overly elongate the current post.)

This is why I believe the humanities are not a good target for limited public funds, at least at present.

I suppose that's where the rub always lies, isn't it. Well, you're getting your wish, since humanities departments are shuttering at an unprecedented rate. I fully agree that there is no "utilitarian" argument for why much of this work should continue. All I can do is try to communicate my own "perspective" (heh) on how I see value in this work, and hope that other people choose to share in that perspective.

This is still my benchmark for what serious AI research should be thinking about:

https://www.anthropic.com/research/claude-character

Little has changed in 2 decades

I have considered it, and found that hypothesis lacking. Perhaps it would be helpful if you advanced an argument in your favor that isn't just "hmm.. did you consider you could be wrong?"

Buddy, to put it bluntly, if I believed I was wrong then I would adjust in the direction of being... less wrong?

Also, have you noticed that I'm hardly alone? I have no formal credentials to lean on, I just read research papers in my free time and think about things on a slightly more than superficial level. While we have topics of disagreement, I can count several people like @rae, @DaseindustriesLtd, @SnapDragon, @faul_sname or @RandomRanger in my corner. That's just people who hang around here. In the general AI-risk is a serious concern category, there's everyone from Nobel Prize winners to billionaires.

To think that I'm uncritical of LLMs? A man could weep. I've written dozens of pages about the issues with LLMs. I only strive to be a fair critic. If you have actual arguments, I will hear them.

LLM companies are desperately fighting to move up the value chain; they all want to sell their models as equivalent in performance to PhD candidates, or as independent agents capable of doing high-value knowledge work.

I donno man. How much value is there really here? Unless you just let'r rip and see what happens, all those LLMs doing PhD-level knowledge work will still need to be overseen by PhD-level knowledge workers to check for veracity and hallucinations. It runs into a bit of the "How does a stupid writer depict a smart villain" problem.

And as for the companies that decide let'r rip without adequate oversight, well... I can't venture to guess. Really playing with fire there.

There is probably perfectly adequate shareholder value in getting a billion lonely midwits to pay $10/month, rising to $inf/month in the way of all Silicon Valley service models, and keeping them hooked with the LLM equivalent of tokenized language loot boxes. I'd wager it's even the more significant hill to climb for shareholder value.

Why is this comment +10,-16 for merely making an argument?

Perhaps the rhetorical flourish at the end?

Or this one? +10,-12

Perhaps the jeering paragraph objecting to "fun" being a reason for things to be legal, or the tiresome cars/guns comparison?

Bad argument gets counterargument. Does not get bullet. Does not even get small meaningless negative reinforcement via stupid internet points.

No, a downvote is not a bullet, and an argument against bullets is not an argument against "small meaningless negative reinforcement via stupid internet points".

Regarding anthropic, reread Nostalgebraist's post.

Revisiting this conversation with more time in hand, I'm not sure which post you're talking about. RITOT has nothing to do with Anthropic as far as I can tell, and Google seems to turn up this:

https://www.tumblr.com/nostalgebraist/778409187704193024/anthropics-stated-ai-timelines-seem-wildly

Which doesn't seem to be a criticism of Anthropic's research, just a claim that their timelines are too aggressive.

I'm running about 80 miles/a week these days

Impressive. That's a lot. I'm at half that, and with a lifting schedule too, I go to bed feeling beat up most days.

The total salary of all therapists is surely far higher than the combined salary of all nuclear engineers? I tried to find aggregate employment figures and failed.

Broadly, there are huge numbers of people who are very lonely and realistically unable to fix that. I think the value from providing a real-enough friend to them would be vastly more valuable, in both utilitarian and monetary terms, than almost anything else. I hope, of course, to move to an open, almost-free solution.

Fair point. That response was less than maximally pro-gun, but it 1. is mostly on the topic of suicide, 2. is still pretty lukewarm, and 3. comes with a healthy amount of throat-clearing: "I'm not arguing that this, in itself, is a persuasive argument in favour of banning guns, and can see the merits of both sides of the debate (particularly the "guns as a check against encroaching authoritarianism" argument advanced by many, including Handwaving Freakoutery, formerly of these parts)".

Why is this comment +10,-16 for merely making an argument? Or this one? +10,-12

Bad argument gets counterargument. Does not get bullet. Does not even get small meaningless negative reinforcement via stupid internet points.

It's good to have you lay out the evidence behind your claims, better late than never. I must note that that's not the point; both Nara and I are asking you to submit such evidence proactively, not after moderation.

You do not need citations for saying that water is wet. But if you are making an inflammatory claim (and someone arguing that they didn't think it was inflammatory is not much of an excuse), then you need to show up and hand receipts before being accosted by security.

Yes, SIG would like very much if people all said the 320 was fine. That's what your link says. It's just rewritten SIG ad copy, as is obvious from the very first sentence:

New Hampshire gun-maker Sig Sauer is asking two federal agencies — the FBI and the Department of Homeland Security — to vouch for its embattled P320 pistol.

The entreaty comes as part of a lengthy statement Sig Sauer released July 29 as it continues to push back against allegations — some in the form of lawsuits — that the P320 is unsafe.

The number of terrible takes on AI on this forum often seem to outweigh even the good ones.

Have you considered that you might be the one whose takes are the terrible ones, because LLMs match your desires and thus validate your pre-existing pro-AI-future biases? From an outside perspective, everything I've seen you write about LLMs matches the stereotypical uncritical fanboy to a tee. Always quick to criticize anyone who disagrees with you on LLMs, largely ignoring the problems, no particular domain expertise in the technology (beyond being an end user), and never offering any sort of hard proof. IOW, you don't come across as either a reliable or a good-faith commenter when it comes to LLMs or AI.

As you wish.

evidence points towards an advantage of men over women in fluid intelligence (Gf) [2]–[4], but also in crystallized intelligence (Gc) and general knowledge [5], [6].

https://pmc.ncbi.nlm.nih.gov/articles/PMC4210204/


Women’s ways of knowing, the seminal work on women’s development theory, by women:

The first 3 (lowest) of the 5 types of women's ways of knowing are:

The Silence: These women viewed themselves as being incapable of knowing or thinking, appeared to conduct little or no internal dialogue and generally felt no sense of connection with others.

Received Knowledge: Received knowledge describes the epistemological position in which women in the study perceived knowledge as a set of absolute truths received from infallible authorities. Received knowers tended to find disagreement, paradox or ambiguity intolerable, since these violated the black-and-white absolutist nature of knowledge.

Subjective knowers rely on their own subjective thoughts, feelings and experiences for knowledge and truth - the "infallible gut" as Belenky, Clinchy, Goldberger and Tarule refer to it. Along with the nascent discovery of the inner voice, subjective knowers showed a general distrust of analysis and logical reasoning and did not see value in considering the weight of evidence in evaluating knowledge. Instead, they considered knowledge and truth to be inherently personal and subjective, to be experienced rather than intellectualized.[1] Belenky, Clinchy, Goldberger, and Tarule state that subjective knowers often block out conflicting opinions of others, but may seek the support and affirmation of those in agreement.[1] The authors note that half of the women in their study occupied this position, but that they were spread across the full range of ages.

Much like Kohlberg, who found that women were, on average, stuck at a lower level of moral development than men, they found that most women are epistemologically stuck in early adolescence (the infallible-gut people):

Relationship to Perry's cognitive development theory

Subjective knowledge is similar to Perry's multiplicity, in that both emphasize personal intuition and truth.[4] However, Perry identified the typical age of the transition to multiplicity as early adolescence, while the women in the above study exhibited this transition over the whole spectrum of ages studied.

I don't know if I buy that it's just "simping". Organic trends come and go, they aren't usually capable of maintaining world-spanning activist infrastructure for decades on end.

No; intelligence, how the brain operates, and the interconnected nature of overall intelligence in human beings are quite complex. A lot of decision making apparently isn't done just by the brain.

Our limited understanding, and an even more limited implementation, are not likely to lead to even more intelligent synthetic beings on the current path. AI soyjacking is the only acceptable religious belief in rat spheres.

Asking for a ranked list sounds like a great solution. Sometimes it is wrong even when it's not being sycophantic (which I don't mind; it's not magic, and the information I'm giving, as someone with no clue what I'm doing, is imperfect at best), so that sounds like a two-birds-one-stone kind of fix.