The idea that right-wing-dominated forums would have any sort of moral superiority (in terms of the average rate of dogpiling and similar behavior by users) over left-wing-dominated ones in some general/average/typical/categorical sense - or the reverse - is one I find so utterly absurd* and detached from reality that I'd attribute it to someone on TheMotte only if they explicitly made that specific claim. I don't see anything in the comment to which you replied, nor in this comment thread in general, where I would feel comfortable inferring such a claim. It seems to me that the comment is about the experience of individual users who tend to "rock the boat" with respect to the dominant side in the forum.
And it seems to me to be about the specific state of things right now; i.e., leftists do have easy access to lots of online echo chambers in a way that rightists don't. As such, even if leftists and rightists are exactly as likely to fall victim to their natural human biases when managing forums they dominate, we see an emergent difference in the types of leftists and rightists who congregate at different types of forums. I'm not the original commenter, but it seems to me that reading some sort of moral judgment - about groups, forums, sides, etc. - into the comment to which you replied is jumping to conclusions.
* I find this absurd because, besides being obviously impossible to adjudicate in any way, it also has basically no consequence for how people actually interact with each other and with forums. It's not as if there's some movement that claims, "right-wingers are non-coincidentally, non-incidentally morally superior to left-wingers in terms of maintaining good-faith discussion forum standards, and therefore, for the betterment of discourse throughout society, we should make all forums right-wing dominated" or whatever (except in the tautological case where people redefine "values logic, empiricism, and evidence over emotion" as "right-wing"). I mean, maybe there is, but it's certainly not one that has any meaningful influence.
If 70k out of 100k comments on a reddit forum are "boo-outgroup" vs 800 out of 1k on the Motte, the Motte is far more "boo-outgroup" despite there being fewer "boo-outgroup" comments on the Motte overall. The rate is much higher. Your stated "rate" is a per-instance or total count. This manipulates statistics to give the lower-population forum more grace, when per capita is more honest, because it accounts for the confounding factor of the lower population.
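To spell out the arithmetic in your own hypothetical: 800/1,000 = 80% versus 70,000/100,000 = 70%, so by that per-comment metric the smaller forum does come out worse, even though its absolute count (800 vs. 70,000) is far lower. I'm not disputing that arithmetic.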
But by controlling for the overall population of the forum, you're abstracting away the actual interesting part. If your purpose is to judge the average morality of the users of a forum, as measured by their penchant for dogpiling commenters who try to rock the boat, then sure, you can use that metric. But I'm not sure that's of particular interest to anyone, and I'm certain that it's so abstracted away from the way people interact with online forums that no one can make any sort of meaningful intuitive guess about such things, especially when comparing numbers that are orders of magnitude apart.
That's why, when people talk about places like this/Reddit being unfair or hostile to leftists/rightists, I believe it tends to be about a typical (boat-rocking) leftist's/rightist's experience in using that forum, not about some sort of average of how commenters tend to react to such comments. Perhaps I'm wrong, and most people talking about such things are using your abstract metric; I just don't know what use that metric would have other than as some sort of virtue-measuring contest between forums.
If this is your major point then you are making a point I am not arguing; it's not about quantity, it's that it happens at all. This place has orders of magnitude fewer people than the mirror-image typical subreddit. This is like saying it's safer to be next to a bear in the woods because bears kill fewer people than men do.
If this is your interpretation of my point, then you are wrong. The "rate" is on a "per-[leftist/rightist] comment" basis (implicit: a comment that bucks the general popular trends of the forum), not on a "per-day" basis or whatever. If the rate of physical injuries during a typical encounter with a bear in the woods were lower than the rate during a typical encounter with a man, then it absolutely would be safer to be next to a bear.
I'm not really going to weigh into a discussion of "quality". That is highly subjective, to the point that one could easily just say every post that gets dogpiled and mass-reported was "low quality". It's a just-so story.
If you aren't going to weigh into "quality," then all you're really doing is commenting on the lack of equality of outcomes (as measured by things like responses that amount to dogpiling, Gish galloping, etc.) based purely on left-right partisanship. And that's just irrelevant here, because the point of this forum isn't to achieve such equity. Quality is highly subjective, but it's not infinitely so, and there are certainly qualities, agnostic to partisanship, that this forum specifically demands of comments both by rule and by norm, and it is a good thing that a comment's quality determines, in large part, the pushback it gets from other commenters.
Lefties here are absolutely dogpiled, mass-replied, Gish-galloped, mass-reported, or downvoted.
As a lefty (in multiple senses of the word) here, I disagree heavily. The rate at which this happens is orders of magnitude lower than in the mirror image: a typical subreddit that hosts discussion of topics similar to those here. By my observation, leftist posters who get treated this way are almost always treated this way in response to particularly careless or bad-faith posts*.
* Aside: these extremely low-quality posts often have characteristics of posts that would be popular on a typical subreddit; my conjecture is that these commenters are used to calibrating their arguments for the type of scrutiny found in those environments and didn't properly recalibrate for the standards of this forum before commenting.
Reads like consensus building.
...
It's no secret that the median individual on this forum has been steadily shifting right.
Ummm...
Reading your response before reading the quoted part (though I caught the capitalized names in the quote and registered them mentally), I had assumed the phrase was meant to invoke someone laundering the current Overton Window as a way to crystallize it where it is, rather than someone expanding or shifting it. Since we don't have an equivalent job for making windows bigger or moving them around in a wall (though hopefully nanotech will become so cheap in the future that using it for some silly feature like sliding the windows in your home into any arbitrary configuration at any time will be taken for granted), I can't think of a term I'd find more appropriate.
I don't think LLMs can generate meaningful human-like feedback of what it feels like to use the software. They just don't see the UI in the way that humans do.
I don't see why LLMs would need to "see" the UI the way humans do in order to generate meaningfully useful feedback for improving the UI (or any other element of the software) as judged by humans. It's not as if the LLM would need to reason out "this UI element gets in the way of this process due to that issue" or "in my experience of trying to use this software in my workflow, this UI element could be improved by moving it here," or whatever. It'd be doing naked, dumb pattern matching: predicting words based on the prompt (which would include the sequence of 1s and 0s that make up the software, along with instructions to produce the kind of text a helpful human tester would provide) and its weights. There's no proof that this would work, but I also see no reason why simply scaling up current techniques and/or making them faster wouldn't allow LLMs to generate feedback of this kind that is just as useful as human user feedback.
Yes, Adobe Premiere is a few million lines of code, and LLMs can create millions of lines of code within weeks. However, Adobe Premiere wasn't one-shotted by a person, and an agent can't one-shot it either. The only way to build an excellent enterprise tool is to build a shitty enterprise tool, get feedback, and improve it over time. In startup speak, this private feedback is referred to as a 'moat'. LLMs make this loop faster, but you can't skip it.
The value in the text/images/media/any content that forms the feedback comes from how modifying the software in a way guided by the feedback improves the software as judged by the people who gave the feedback (and people like them), not from the fact that the content was generated by humans using the software and expressing their opinions. Generating the feedback that way, through actual humans who used the software, is a great way to ensure that the feedback is valuable in this sense, but I don't see why a sufficiently advanced LLM (or LLM-based tool) couldn't generate feedback with just as much value (i.e., modifying the software in a way guided by that LLM-generated feedback improves the software as judged by the people who would have given the feedback, i.e., the target audience), just by predicting the next word. And then the tool could modify the software through iterations until the feedback crosses some threshold of asking for small enough changes or something. I don't think this would be considered a "one-shot," but it certainly seems like it would require almost as little investment in human effort. It's just that the LLM-based tools don't seem sufficiently advanced (or perhaps they're not sufficiently fast?).
Lol. What were you doing on the platform?
Evidently, trying to produce videos that OpenAI disapproved of. More precisely, IIRC, I was inspired by some videos I saw on Sora where, apparently through some clever prompting and/or iteration, the user had managed to generate and share a video of a woman doing yoga, shot from suggestive angles. I was experimenting with doing the same when I got the ban email. Grok Imagine has lines as well, as I alluded to before, but it places its lines very, very far from where Sora did.
I wouldn't quite put it at the level of Shakespearean words. "Fortnight" wasn't a common everyday word when I was a kid in the 90s, but most high schoolers would have been familiar with the word and what it meant, if memory serves, likely due to its usage in history class when reading documents relating to the Revolutionary War or the Civil War - which were only about 150-250 years ago, not 400 like Shakespeare.
I do wonder if kids these days know that Fortnite is a play on that word and what that word means, or if they just think it's some nonsense made-up word.
Wow, that phenomenon seems really reminiscent of how "sexual assault" became a catchall term used when the speaker wants to create the connotation of forcible rape in contexts where the reality is some sort of harassment of a sexual nature. As well as how "sex trafficking" became a catchall term used when the speaker wants to create the connotation of kidnapping women into sexual slavery in contexts where any level of prostitution took place. I've come to really dislike these overt attempts to engineer language that hide covert attempts to manipulate others into believing things one finds useful for them to believe, and I personally always just call them "child porn" and "revenge porn."
On the other hand, most of the effort in a commercially viable video generation product is in the product engineering, not the model itself. That's asking a lot of effort from OpenAI in an area where they are not well equipped to beat seasoned product engineering teams.
It seems evident from their actions that engineers at OpenAI lacked the ability or capacity to use GPT-5 to cost-effectively write an Adobe Premiere competitor with Sora integration - with a UI that's just as good, just as intuitive and user-friendly for longtime video editing professionals, just as stable and responsive, etc.
I wonder if/when AI companies will reach the point where they could just do that for any arbitrary existing software. At what point could one of these companies just instruct the AI to generate an Excel clone that has perfect backwards compatibility with MS Excel, but also has their AI integrated, and consistently get a viable software product as the result? What about a Windows clone that has perfect compatibility with all Windows-compatible software, but also has their AI integrated? What about an Oblivion clone that has perfect compatibility with all existing save files, doesn't require major QOL and performance mods to be enjoyable, and also has their AI integrated?
It appears to me that these issues are probably solvable. Disallowing generative editing of user-uploaded images seems like a no-brainer.
Grok Imagine has indeed implemented something like this, where uploaded images of real humans become extremely difficult to edit without triggering a censor, and I think videos might be right out. Unfortunately, even this gimped censor is severely limiting, and a full-on ban on generative editing or animation of user-uploaded images would take away something like 90% of the use cases for image/video generative AI. So much of using gen AI to produce images and videos is about trial and error and iteration - manual edits -> AI generation building on them -> manual edits of the AI generations -> AI generation building on those -> etc. - often spanning multiple non-interoperable AI tools (e.g., generate the original image in Midjourney, edit it locally using Krita and Stable Diffusion, then upload it to Grok to animate). Without the ability to take arbitrary image input, a tool can only serve as the origin point of that workflow, which doesn't amount to much, or as a simple time-waster slot machine for generations.
Personally, I do not like the notion that it's possible to arrange pixels in an illegal way without some other independently illegal action as a causal factor, and I hope that attempts to make it so fail horribly. Unfortunately, I'm not hopeful, as it seems to me that support for free speech and free thought isn't very high right now in the USA. This is also why I'm still holding out hope that video gen on local hardware will become "good enough" that private servers owned by companies like OpenAI, xAI, Google, etc. don't become effective gates for this sort of creative endeavor for the layman (or at least the lay enthusiast).
I tried using Sora for about a month at the end of last year, but I had to stop due to getting banned. Grok Imagine wouldn't ban me, so I've been using that instead. My wild guess is that a social media platform based entirely around AI generated videos like Sora can only exist in a sustainable way if it's explicitly for erotic/pornographic material - there's simply not enough demand for creating or viewing AI generated videos that aren't in that category to get enough users and views to pay for the generations.
I have mixed feelings: Sora was clearly much better and more flexible than Grok Imagine, so I would've loved to see it develop further, but at the same time, the lower censorship in Grok, and xAI's general attitude towards censorship versus OpenAI's, makes me think improvements in Grok are more likely to bear fruit. Of course, without Sora around, xAI has less reason to improve Grok... And Grok Imagine is also still censored, which isn't great, but it's the least worst, at least. In the long run, I'd hope that local video generation becomes "good enough," but that'll probably require a world where dual 5090s with 64GB of VRAM are considered a quaint little living room computer for sending emails and running old games at a tolerable 25fps, which I'm guessing is within 2 decades.
Do they genuinely think that a world that normalizes blockades of international shipping is one they would actually want to live in?
The answer to your question is No, because the answer to the first 4 words of your question is also No.
That's just the baseline reality that everyone has already baked into their model of the world. The question is: why do some ideologies appear to have more of this kind of abuser in leadership roles than others? Of course, that's either a trivial question (answer: because how things appear to you is primarily determined by your biases, rather than by underlying reality) or a loaded question (i.e., the question implies that this "appearance" correlates with reality), and so the "clean" version of that question would be: "Do some ideologies actually have more of this kind of abuser in leadership roles than others, and if so, is there something about the psychology of these ideologies that leads to this difference in prevalence?"
Which is an interesting question to at least speculate about, though any actual conclusions would be completely unwarranted.
Being part of any sacrosanct Noble Cause can do it, if the cause's actually-noble followers are afraid that making ignoble leaders' transgressions public would unfairly reflect badly on the Cause - this works if the Cause includes an "ultimate truth", but it also shows up in non-profits, charitable organizations, environmentalist organizations, police organizations... Even a mundane worry like "we don't want to scare kids away from the Boy Scouts just because of this one bad apple" can do it, for a while.
I pseudo-apologize pre-emptively for bringing up my favorite hobby horse/pet peeve, which is that these so-called "actually-noble followers" are in fact not noble, by their actions: they prioritize their Cause's optics over justice for the victims of their leaders. As you say, if you believe that the Cause has some "ultimate truth" that supersedes all else (which, IME, applies exactly as well and as often to non-profits, charitable organizations, etc. as to any other religious organization), you can justify this line of thinking.
However, the issue there is that no truly noble follower of any Cause would be ignorant of the pattern of people who have followed Causes in the past; to follow a Cause without skeptically analyzing the forces that led you to being convinced by it is something I'd consider unambiguously ignoble. And one pattern that any follower of any Cause must notice is that most people (likely almost everyone) in the past who were convinced by a different Cause were wrong. Therefore, anyone who believes strongly in their Cause can't actually conclude anything about its correctness; their strong belief in it doesn't provide any meaningful information for determining whether it's correct.
If God came down and proved His existence and then declared that This Cause is the Correct one, then perhaps noble followers of This Cause would be just in allowing [bad behavior] as a necessary cost for accomplishing This Cause. Perhaps. But, AFAICT, God never did that (and never existed, but that's a different conversation), and so we live in a world where the stupid ignoble are cocksure about their Cause while the intelligent noble are full of doubt about their Cause.
Unfortunately, being unjustly cocksure about something tends to be more attractive than being justly doubtful about something, and so it seems to me that basically any Cause is guaranteed to attract ignoble people near the top.
FYI as of 2024, a sequel to Resurrections was reportedly in the works. Don't know if it's going anywhere, but it seems like they were at least trying to do something more with it.
(hard to believe but as a young man [Vladimir Putin] was something of a pretty-boy).
Considering that, as an old man, he's something of a pretty boy, I don't find that hard to believe.
I see a bunch of essentially "we should" statements, which I interpret as "the world would be better if this were so," and for your statements, I do agree. I also think the world would be better if we all got unicorns and rainbows and eternal life, without illness beyond what's needed to provide just the right amount of suffering and challenge for a good life. If only we could just "we should" our way into that world!
Now, perhaps the world you describe, unlike one with unicorns (until we figure out genetic engineering to a sufficient extent), is possible to get to from here. I don't think it's obviously impossible. Figuring out whether and how to change this world into that better world is the real challenge, not just "what would a better world look like." And you cannot expect women, as a group or on average, to do anything other than be maximally sabotaging to such a project, because giving men information and feedback on how to be attractive directly harms women's ability to judge them as potential mates. So any project of getting to that better future from here must take their sabotage into account and route around it or through it. Their sabotage is as much a fact of life as death and taxes, so it's best to just accept it as it is instead of judging it bad or good.
It says something about the psychology of this particular ideology that so many prominent lefty leaders turn out to be rapists and/or pedophiles. It genuinely now seems like there are fewer such leaders, political or otherwise, in the last 100 years who DON'T have such credible allegations than those who do.
I mean, many things can genuinely seem some way without being true. I don't know that what you claim is in evidence, and certainly this one example of Chavez doesn't move the needle one way or another.
However, I do think there's a core truth here when it comes to the modern leftist movement, which is, and has been for about 1.5 decades, dominated by the movement that has come to be known as "woke." Not a unique characteristic, but certainly a defining one of "wokeness," is prioritizing a person's identity over their behavior or speech* when judging them, specifically in contexts where their behavior or speech would be consequential to the outcome. Combined with another defining characteristic of "wokeness" - automatically categorizing all constructive criticism as bad-faith, malicious attempts at sabotage - this creates massive opportunities for people of the right identities to become leaders while engaging in horrible behavior, as long as that horrible behavior pays off in harm only to people the movement doesn't care about.
Now, Chavez didn't exist in such an environment. He likely would have benefited in that environment, but he wouldn't have had enough oppression points to just get to the top without legitimate leadership skills. So I think his situation (and any others from eras past) likely had different causes.
* More accurately: having ready-available justifications for why to selectively prioritize identity or behavior/speech based on personal preferences.
If a society believes that Black people are less intelligent and more criminal, and they are wrong, millions of innocent people go through their lives with a boot stamping on their faces.
The problem is that if society believes the opposite of that, and they are wrong, then also millions of innocent people go through their lives with a boot stamping on their faces. There's no safe "false positives are clearly better/worse than false negatives" situation here that makes it easy to just err on one side. This is one of those legitimate Hard Problems that we need to actually do real scientific research to get right.
Sometimes, at the direct one-to-one cost to the male children of other women who aren't related to them, of course.
That's only a problem if you believe that men in general or on average deserve a fair chance at accomplishing things like romantic partnership, sex, children, family, general life satisfaction. But my observation is that women aren't concerned with that, and I doubt it's physically possible to make them concerned with that, in general. They're concerned with finding the highest quality partner for themselves, and the highest quality partner is heavily determined by the partner's genes, and so the point of the test is purely to discriminate, not to be a system that men can learn from in order to pass it. The entire point is that they should be able to pass it without any help, despite the, again, bizarre, contradictory, nonsensical nature of the test, which also has a horrendous feedback mechanism. If the tests fixed any of these things, then the tests would work less well.

So I don't know if the OP was motivated by that, or if there's some other reason, but I've definitely noticed what seems like a big dichotomy in the way people approach modern generative AI tools. Some types of people see a tool whose limitations make it fail in spectacular ways that seem silly or stupid, throw their hands up in the air, and declare it not sufficiently useful for their purposes. Other types of people see a tool with limited abilities and figure out ways to exploit those abilities to accomplish things they couldn't without the tool, even if it means adjusting their workflows or inventing new ones.
I first noticed this when I got heavily into Stable Diffusion in ye olden dayes of 2022. Of course, awful hands, foreground lines merging into background lines, inconsistent lighting, and hallucinations were all famous issues of image generation AI then. They're still issues now, but vastly reduced. Some people saw that and declared AI useless for their needs, since hand drawing allows for the control that AI doesn't. Other people saw that giving up over a generated mess with 7 fingers was like making one bad brush stroke on an empty canvas and giving up on the painting, and figured out that it's easy to iterate on subsections of the image multiple times (something like the loop sketched below), allowing someone to create illustrations far beyond their manual ability while still avoiding the common AI pitfalls.
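For the curious, here's a minimal sketch of what that iterate-on-subsections loop can look like using the diffusers library. The checkpoint name, file paths, and rectangle mask are illustrative stand-ins; in practice you'd paint the mask by hand in an image editor, and any inpainting-capable checkpoint works the same way:

```python
# Hypothetical illustration: iteratively repair one region (e.g., a bad hand)
# of a generated image by inpainting only a masked subsection, keeping the rest.
import torch
from PIL import Image, ImageDraw
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

image = Image.open("draft.png").convert("RGB").resize((512, 512))

# White = region to regenerate, black = keep. A rectangle stands in here for
# the mask you'd normally paint by hand over the offending area.
mask = Image.new("L", image.size, 0)
ImageDraw.Draw(mask).rectangle([300, 350, 460, 500], fill=255)

# Re-roll just the masked region across several seeds, then pick the best by eye
# and repeat on the next flawed subsection.
for seed in range(8):
    candidate = pipe(
        prompt="a well-formed hand, five fingers, detailed",
        image=image,
        mask_image=mask,
        generator=torch.Generator("cuda").manual_seed(seed),
    ).images[0]
    candidate.save(f"hand_fix_{seed}.png")
```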
I noticed it happening with LLMs shortly after, where some people zero in on stupid mistakes, like the hard-R problem of strawberries, and declare the tool too inconsistent or too stupid to be of much use. Other people take the limited abilities as given and figure out how to build structures and scaffolding that let the tool exceed those natural limitations, enabling them to create code that they couldn't have before, or that would have taken far more time.
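A trivial, hypothetical illustration of that scaffolding mindset applied to the strawberry problem: rather than asking a token-based model to count letters directly (something it's structurally bad at), have it emit code that counts, and run that:

```python
# Instead of trusting an LLM to count letters in a token-mangled word,
# scaffold around the weakness: the model writes this, the code does the counting.
def count_letter(word: str, letter: str) -> int:
    return word.lower().count(letter.lower())

print(count_letter("strawberry", "r"))  # 3 - no hard-R problem
```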
I don't think the former type of person is doing this in bad faith, or with a desire to sneer. I think there's probably just a spectrum in people's attitudes with something like this, and because AI is both ridiculously bad at some things and ridiculously good at others, this causes the spectrum to bifurcate.