Primaprimaprima
INFJ 5w4 549 so/sp VELF IEI
"...Perhaps laughter will then have formed an alliance with wisdom; perhaps only 'gay science' will remain."
User ID: 342
People on the left are by and large much more focused, in my experience, on experiential states, following the heart, and of course contemplative, mystical spiritual practice. [...] I think that the left I've broadly sketched here represents chaos, and the right represents order.
But isn't it the exact opposite?
Ok, maybe not the exact opposite. But it's more complicated than any 2D schema would suggest.
Leftists are not anarchic chaos agents. In fact, committed leftists are inordinately concerned with order, justice, fairness, morality, and so forth. "We must stop Trump at all costs to defend American democracy" is order, not chaos, even if you think it's an order based on faulty reasoning and ulterior motivations. "Yeah let's let the reality TV star become President and see what happens" is chaos. Ironically, self-identified anarchists often have a fetishistic preoccupation with structure, discipline, and power. The responsibility for actually enforcing this discipline is "distributed" (or so they claim) in order to avoid individual culpability, but the underlying structural dynamics are clear.
The dream of Marxism is a fully transparent social order grounded in pure reason. It's irrational that billionaires get to own multiple yachts while there are still people who can't afford medical care. We can use our brains to figure out a more fair way to distribute resources, instead of leaving it up to the irrationality of the market. There are no limits to what the unfettered human mind can accomplish. That's the basic impulse.
Of course, you might start to question how "rational" your opponents really are, if you think they're tenaciously holding onto premises that are incoherent or have been falsified. But, naturally, they would just turn that back on you and say that you're the one who's reasoning from incorrect premises. So we're back at square one.
Rejections of leftist utopian ideals are ultimately grounded in a rejection of the infinite power of reason: there are limits to how much reality can be rationally known and managed, there are things that can't be controlled or changed, etc.
(And while we're on the subject of Jordan Peterson: femininity is obviously order, and masculinity is obviously chaos. Wild how many people get this wrong.)
1189 comments at the time of this writing, which is close to the all-time low, but we've had a few other threads in the same neighborhood since the site migration. Around 1500 has been the average for probably over a year now.
I did make a bunch of posts about it a while back, but I lost interest after it became clear nothing was going to happen, and I haven't followed anything UAP-related in like a year lol.
Of course if we do get confirmation about something genuinely anomalous then that would be wonderful!
If you consider Nick Land rightist, he’s a hardcore accelerationist. (Honestly I’m having trouble thinking of who else would even qualify as a “rightist philosopher” these days but that could just be my own ignorance.)
you're going to go with "it undermines the human soul", huh?
It’s a thankless job, but someone’s gotta do it.
The left absolutely hates AI.
I don't think there's an especially strong correlation between political orientation and attitudes towards AI. Rather, anyone on the left or the right can be pro-AI or anti-AI, but they'll do so for different reasons, giving us four basic quadrants:
- Leftist and anti-AI: it's spreading misinformation and eroding job opportunities for academics, writers, and artists, constituencies who tend to lean overwhelmingly left (rarely will they phrase it so bluntly, but that's clearly one of the major underlying motivations).
- Leftist and pro-AI: it's contributing to the democratization of knowledge and creating new opportunities for intellectual and artistic expression for the differently abled.
- Rightist and anti-AI: it's a threat to traditional values, it undermines the human soul, it's a Satanic deception designed to lead us astray from the path of righteousness. This quadrant is populated, but it might be the smallest. There aren't as many people here as I would expect, and the people who are here tend to skew older (think Alex Jones and Fox News talking heads). I've noticed that a lot of hippy-dippy types who are into astrology and healing crystals and so on are actually surprisingly gung-ho about AI, happily using it to generate book covers, using it as a teacher or conversation partner, etc., which indicates to me that something has gone wrong in my models (in fact, the few people I've seen who analyze and criticize AI from a "humanistic" angle tend to be leftists).
- Rightist and pro-AI: Elon Musk, Nick Land, tech bro accelerationism and utopianism, fuck yeah worker ownership of the Memes Of Production, we can finally generate infinite videos of Trump defecating on Mexican immigrants without relying on commie art school students. Popular on /pol/ and among the young right more broadly.
I remain perpetually confused as to how a group of Tough-Minded Rationalists™, who believe in the invisible hand of the free market and facts over feelings, can be so concerned about birth rates.
Organisms that can adapt to their environment will reproduce. Those that can't will die off. So it always has been, so it always will be.
Why so much ire over nature taking its course? Any attempt to engage in large scale social engineering that would cause civilization to deviate from its current course in order to force it to align with an abstract values framework starts to sound a bit... socialist-y.
Weird. It's just imgur. And I promise it's perfectly SFW!
I kinda don't believe in utilitarians
Depends on what you mean by "utilitarian".
- Actual, honest-to-God philosophical utilitarians who would happily kill someone if it meant they would be able to induce mild sexual gratification in a warehouse of BB(6) rabbits -- few, if any. The same way there are few, if any, people who genuinely hold any wacky philosophical position.
- People who consciously try to align their values with a utilitarian framework, e.g. the people who are really involved with EA -- they definitely exist, and I see no particular reason to doubt the sincerity of their convictions. There is of course still plenty of room for personal bias to sneak in, but you can say the same about any ethical system, including Natural Law, so I don't think the utilitarians are any worse off here than others ("it just so happens that the Natural Law ordained by the creator of the universe aligns perfectly with what I wanted anyway, ain't that the darndest thing").
- People who live in a "utilitarian manner", where if it feels good then that's all that matters and thinking too "deep" about it is for nerds -- undoubtedly, there are many.
What if we had a ‘hear the other side’ cwr
…this IS the “hear the other side” thread.
for every Hlynka-stan who misses him, there is someone who was screaming at us to ban him for years.
"50% of the forum loves them and 50% hates their guts" is practically the definition of an interesting poster. If there's unanimous agreement that someone is a good contributor, then they may indeed be a "good" poster, but there's a cap on how interesting they can be.
And I've already written several times about how we did everything we could, short of just literally saying "The rules don't apply to Hlynka," to avoid having to permaban him.
My suggestion has always been that bans be capped at a length of one year, except in incredibly egregious cases (e.g. spam bots, or the person launched cyberattacks on the forum or something). I don't expect that this suggestion will ever actually be implemented, but it is a possibility nonetheless.
Go on, tell me who on this list was a valuable contributor who you think should be granted amnesty?
Hlynka is the primary example of course, also fuckduck9000, AhhhTheFrench, AlexanderTurok.
In fairness, people have been saying “the forum will die because you’re banning all the interesting people” for at least 5 years now.
On the other hand, we actually have banned some interesting people, and the forum is worse for their absence.
Can anyone explain the government shutdown to me? I haven't followed the story at all. If you consider yourself to be aligned with the Democrats, I'd especially like to hear your perspective.
After not following the news at all since the beginning, I casually overheard on Fox News that "the Democrats are keeping the government shut down over Obamacare". I assumed that that couldn't be right. Surely the whole thing couldn't be happening because of any one policy issue; there had to be more to the Democrats' side of the story. But then I started reading reddit comments and the consensus from leftists seemed to be that, yes, we really are keeping the government shut down over Obamacare, and this is Good and Righteous.
My initial reaction is that this seems rather petulant and childish on the part of the Democrats, because I think the minority generally should be expected to make concessions to the majority, but that's where my factual knowledge essentially ends so I'll let other people argue the case.
Well, it's a variation of the goat fucker problem. You can be an upstanding citizen your whole life, but if you fuck one goat, you're still a goat fucker. Similarly, it doesn't matter how many complex problems you can correctly solve; if you say that "entropy" spelled backwards is "yporrrtney" even once (especially after a long and seemingly lucid chain of reasoning), it's going to invite accusations of stochastic parrotism.
Humans make mistakes too, all the time. But LLMs seem to make a class of mistakes that humans usually don't, which manifests as them going off the rails on what should be simple problems, even in the absence of external mitigating factors. The name that people have given to this phenomenon is "stochastic parrot". It would be fair for you to ask for a precise definition of what the different classes of mistakes are, how the rate of LLM mistakes differs from the expected rate of human mistakes, how accurate LLMs would have to be in order to earn the distinction of "Actually Thinking", etc. I can't provide quantitative answers to these questions. I simply think that there's an obvious pattern here that requires some sort of explanation, or at least a name.
Another way of looking at it in more quantifiable terms: intuitively, you would expect that any human with the amount of software engineering knowledge that the current best LLMs have, and who could produce the amount of working code that they do in the amount of time that they do, should be able to easily do the job of any software engineer in the world. But today's LLMs can't perform the job of any software engineer in the world. We need some way of explaining this fact. One way of explaining it is that humans are "generally intelligent", while LLMs are "stochastic parrots". You're free to offer an alternative explanation. But it's still a fact in need of an explanation.
Of course this all comes with the caveats that I don't know what model the OP used, a new model could come out tomorrow that solves all these issues, etc.
What's a "stochastic parrot"? Well, it's something that talks like this, basically.
Please do recommend this place to other people. We’re small enough that we really need the exposure. If they come here and then don’t like it or can’t follow the rules, you can say, well, guess it wasn’t meant to be.