I don't think LLMs can generate meaningful human-like feedback of what it feels like to use the software. They just don't see the UI in the way that humans do.
I don't see why LLMs would need to "see" the UI in a way similar to humans in order to generate meaningfully useful feedback for improving the UI (as well as any other element of the software) as judged by humans. It's not like the LLM would need to reason out "this UI element here gets in the way of this process due to that issue, etc." or "in my experience of trying to use this software in my workflow, this UI element could be improved by moving it here," or whatever. It'd be doing naked dumb pattern matching, of predicting words based on the prompt (which would include the sequence of 1s and 0s that make up the software, as well as instructions to produce text that a helpful human tester would provide, or the like) and its weights. There's no proof that this would work, but I also see no reason why simply scaling up current techniques and/or making them faster wouldn't allow LLMs to generate feedback like this which is just as useful as human user feedback.
Yes, Adobe Premiere is a few million lines of code, and LLMs can create millions of lines of code within weeks. However, Adobe Premiere wasn't one-shotted by a person, and an agent can't one-shot it either. The only way to build an excellent enterprise tool is to build a shitty enterprise tool, get feedback, and improve it with time. In startup speak, this private feedback is referred to as 'moat'. LLMs make this loop faster, but you can't skip it.
The value in the text/images/media/any content that forms the feedback comes from how modifying the software in a way guided by the feedback improves the software as judged by the people who gave the feedback (and people like them), not from the fact that the content was generated by humans using the software and expressing their opinions. Generating the feedback that way through actual humans who used the software is a great way to ensure that the feedback is valuable in this way, but I don't see why a sufficiently advanced LLM (or LLM-based tool) couldn't generate that feedback with just as much value (i.e. modifying the software in a way guided by that LLM-generated feedback improves the software as judged by the people who would have given the feedback, i.e. the target audience), just by predicting the next word. And then modify the software through iterations until the feedback crosses some threshold of asking for small enough changes or something. I don't think this would be considered a "one-shot," but it certainly seems like it would require almost as little investment in human effort. It's just that the LLM-based tools don't seem sufficiently advanced (or perhaps they're not sufficiently fast?).
Lol. What were you doing on the platform?
Evidently, trying to produce videos that OpenAI disapproved of. More precisely, IIRC, I was inspired by some videos I saw on Sora where, apparently through some clever prompting and/or iterations, the user had managed to generate and share a video of a woman doing yoga, shown from suggestive angles. I was experimenting with doing the same when I got the ban email. Grok Imagine has lines as well, as I alluded to before, but it places its lines very very far from where Sora did.
I wouldn't quite put it at the level of Shakespeare's words. "Fortnight" wasn't a common everyday word when I was a kid in the 90s, but most high schoolers would have been familiar with the word and what it meant, if memory serves, likely due to its usage in history class when reading documents relating to the Revolutionary War or the Civil War. Which were only about 150-250 years ago, not 400 like Shakespeare.
I do wonder if kids these days know that Fortnite is a play on that word and what that word means, or if they just think it's some nonsense made-up word.
Wow, that phenomenon seems really reminiscent of how "sexual assault" became a catchall term to be used when the speaker wants to create the connotation of forcible rape in contexts where the reality is some sort of harassment of a sexual nature. As well as how "sex trafficking" became a catchall term to be used when the speaker wants to create the connotation of kidnapping women into sexual slavery in contexts where any level of prostitution took place. I've come to really dislike these overt attempts to engineer language for the purpose of hiding covert attempts to manipulate others into believing things that one finds useful for others to believe and personally always just call them "child porn" and "revenge porn."
On the other hand, most of the effort with a commercially viable video generation product is in the product engineering, not the model itself. That's asking a lot of effort from OpenAI in an area where they are not best equipped to beat seasoned product engineering teams.
It seems evident from their actions that engineers at OpenAI lacked the ability or capacity to use GPT-5 to cost-effectively write an Adobe Premiere competitor with Sora integration, with UI that's just as good, just as intuitive and user-friendly for longtime video editing professionals, just as stable and responsive, etc.
I wonder if/when AI companies will reach the point where they could just do that for any arbitrary existing software. At what point could one of these companies just instruct the AI to generate an Excel clone that has perfect backwards compatibility to MS Excel, but also has their AI integrated in, and consistently get out a viable software product as the result? What about a Windows clone that has perfect compatibility to all Windows-compatible software, but also has their AI integrated in? What about an Oblivion clone that has perfect compatibility to all existing save files, but also doesn't require major QOL overhaul and performance mods to make enjoyable and also has their AI integrated in?
It appears to me that these issues are probably solvable. Disallowing generative editing of user-uploaded images seems like a no-brainer.
Grok Imagine has indeed implemented something like this, where uploaded images of real humans become extremely difficult to edit without triggering a censor, and I think videos might be right out. Unfortunately, even this gimped censor is severely limiting, and a full-on prevention of generative editing or animation of user-uploaded images would take away like 90% of the use cases for image/video generative AI. Since so much of using gen AI to produce images and videos is about trial-and-error and iterations of manual edits -> AI generation building on it -> manual edits of AI generations -> AI generation building on it -> etc., including using multiple different non-interoperable AI tools (e.g. generate original image in Midjourney, edit it locally using Krita and Stable Diffusion, then upload it to Grok AI to animate), lack of ability to take arbitrary image input would leave it as only the origin point for the workflow, which doesn't amount to much, or just simple time-waster slot machine generations.
Personally, I do not like the notion that it's possible to arrange pixels in an illegal way that doesn't involve some other independently illegal action as a causal factor and hope that attempts to make it so fail horribly. Unfortunately, I'm not hopeful, as it seems to me that support for free speech and free thought isn't very high right now in the USA. This is also why I'm still holding out hope that video gen on local hardware will become "good enough" that private servers owned by companies like OpenAI, xAI, Google, etc. don't become effective gates for this sort of creative endeavor for the layman (or at least lay enthusiast).
I tried using Sora for about a month at the end of last year, but I had to stop due to getting banned. Grok Imagine wouldn't ban me, so I've been using that instead. My wild guess is that a social media platform based entirely around AI generated videos like Sora can only exist in a sustainable way if it's explicitly for erotic/pornographic material - there's simply not enough demand for creating or viewing AI generated videos that aren't in that category to get enough users and views to pay for the generations.
I've mixed feelings, since Sora was clearly much better and more flexible than Grok Imagine, and so I would've loved to see that develop further, but, at the same time, the lower censorship in Grok and xAI's general attitude towards censorship versus OpenAI's makes me think improvements in Grok are more likely to bear fruit. Of course, without Sora around, xAI has less reason to improve Grok... And Grok Imagine is also still censored, which isn't great, but it's the least worst, at least. In the long run, I'd hope that local video generation will be "good enough," but that'll probably require a world where dual 5090s with 64 GB of VRAM is considered a quaint little living room computer for sending emails and running old games at a tolerable 25fps, which I'm guessing is within 2 decades.
Do they genuinely think that a world where normalizing blockades of international shipping is one that they would actually want to live in?
The answer to your question is No, because the answer to the first 4 words of your question is also No.
That's just the baseline reality that everyone has already baked into their model of the world. The question is, why do some ideologies appear to have more of this kind of abuser in leadership roles than others? Of course, that's either a trivial question (answer: because how things appear to you is primarily determined by your biases, rather than underlying reality) or a loaded question (i.e. the question is implying that this "appearance" correlates with reality), and so the "clean" version of that question would be: "Do some ideologies actually have more of this kind of abuser in leadership roles than others, and if so, is there something about the psychology of these ideologies that leads to a difference in prevalence of this?"
Which is an interesting question to at least speculate about, though any actual conclusions would be completely unwarranted.
Being part of any sacrosanct Noble Cause can do it, if the cause's actually-noble followers are afraid that making ignoble leaders' transgressions public would unfairly reflect badly on the Cause - this works if the Cause includes an "ultimate truth", but it also shows up in non-profits, charitable organizations, environmentalist organizations, police organizations... Even a mundane worry like "we don't want to scare kids away from the Boy Scouts just because of this one bad apple" can do it, for a while.
I pseudo-apologize pre-emptively for bringing up my favorite hobby horse/pet peeve, which is that these so-called "actually-noble followers" are actually not noble, due to their actions, i.e. prioritizing their Cause's optics over justice for the victims of their leaders. As you say, if you believe that the Cause has some "ultimate truth" that supersedes all else (which, IME, applies exactly as well and as often to non-profits, charitable organizations, etc. as to any other religious organization), you can justify this line of thinking.
However, the issue there is that no truly noble follower of any Cause would be ignorant of the pattern of people who have followed some Cause in the past; to follow a Cause without skeptically analyzing the forces that would lead you to being convinced by the Cause is something I'd consider unambiguously ignoble. And one pattern that any follower of any Cause must notice is that most people (likely almost everyone) in the past who were convinced by a different Cause were wrong. Therefore, anyone who believes strongly in their Cause can't actually conclude anything about the correctness of their Cause; their strong belief in it doesn't provide any meaningful information for determining its correctness.
If God came down and proved His existence and then declared that This Cause is the Correct one, then perhaps noble followers of This Cause would be just in allowing [bad behavior] as a necessary cost for accomplishing This Cause. Perhaps. But, AFAICT, God never did that (and never existed, but that's a different conversation), and so we live in a world where the stupid ignoble are cocksure about their Cause while the intelligent noble are full of doubt about their Cause.
Unfortunately, being unjustly cocksure about something tends to be more attractive than being justly doubtful about something, and so it seems to me that basically any Cause is guaranteed to attract ignoble people near the top.
FYI as of 2024, a sequel to Resurrections was reportedly in the works. Don't know if it's going anywhere, but it seems like they were at least trying to do something more with it.
(hard to believe but as a young man [Vladimir Putin] was something of a pretty-boy).
Considering that, as an old man, he's something of a pretty boy, I don't find that hard to believe.
I see a bunch of essentially "we should" statements, which I interpret as "the world would be better if this were so," and for your statements, I do agree. I also think the world would be better if we all got unicorns and rainbows and eternal life without illness beyond what's exactly needed to provide just the right amount of suffering and challenge for a good life. If only we could just "we should" our way into that world!
Now, perhaps such a world as you describe, unlike one with unicorns (until we figure out genetic engineering to a sufficient extent), is possible to get to from here. I don't think it's obviously impossible. Figuring out if and how to change this world into that better world is the real challenge, not just "what would a better world look like." And you cannot expect women, as a group or on average, to do anything other than be maximally sabotaging to such a project. Because giving men information and feedback on how to be attractive is something that directly harms women's ability to judge them as potential mates. So any project of getting to that better future from here must take into account their sabotage and route around it or through it. Their sabotage is as much a fact of life as death and taxes, so best to just accept it as it is instead of considering it bad or good.
It says something about the psychology of this particular ideology that so many prominent lefty leaders turn out to be rapists and/or pedophiles. It now genuinely seems like there are fewer such leaders, political or otherwise, in the last 100 years who DON'T have such credible allegations than those who do.
I mean, many things can genuinely seem some way without being true. I don't know that what you claim is in evidence, and certainly this one example of Chavez doesn't move the needle one way or another.
However, I do think there's a core truth here when it comes to the modern leftist movement, which is, and has been for about 1.5 decades, dominated by the movement that has been established as "woke." Not a unique characteristic, but certainly a defining one of "wokeness," is prioritizing a person's identity over their behavior or speech* when judging them, specifically in contexts where their behavior or speech would be consequential to the outcome. Combined with another defining characteristic of "wokeness" - automatic categorizing of all constructive criticism as bad-faith, malicious attempts at sabotage - this results in massive opportunities for people of the right identities to become leaders while engaging in horrible behavior, as long as that horrible behavior pays off in harm to people you don't care about.
Now, Chavez didn't exist in such an environment. He likely would have benefited in that environment, but he wouldn't have had enough oppression points to just get to the top without legitimate leadership skills. So I think his situation (and any others from eras past) likely had different causes.
* More accurately: having ready-available justifications for why to selectively prioritize identity or behavior/speech based on personal preferences.
If a society believes that Black people are less intelligent and more criminal, and they are wrong, millions of innocent people go through their lives with a boot stamping on their faces.
The problem is that if society believes the opposite of that, and they are wrong, then also millions of innocent people go through their lives with a boot stamping on their faces. There's no safe "false positives are clearly better/worse than false negatives" situation here that makes it easy to just err on one side. This is one of those legitimate Hard Problems that we need to actually do real scientific research to get right.
Sometimes, at the direct one-to-one cost to the male children of other women who aren't related to them, of course.
That's only a problem if you believe that men in general or on average deserve a fair chance at accomplishing things like romantic partnership, sex, children, family, general life satisfaction. But my observation is that women aren't concerned with that, and I doubt it's physically possible to make them concerned with that, in general. They're concerned with finding the highest quality partner for themselves, and the highest quality partner is heavily determined by the partner's genes, and so the point of the test is purely to discriminate, not to be a system that men can learn from in order to pass it. The entire point is that they should be able to pass it without any help, despite the, again, bizarre, contradictory, nonsensical nature of the test, which also has a horrendous feedback mechanism. If the tests fixed any of these things, then the tests would work less well.
This creates an inherently muddled message to men. "DON'T listen to the siren song of red pill grifters, DON'T give in to misogyny, DON'T become a parody of masculinity. That's VERY BAD."
"Okay okay, but what should I do instead?"
"Fuck you, figure it out yourself or die alone."
The issue here is that the muddling of the message is the point, and encoded in that above interaction is the clear message: "figure it out yourself [the first step of which is to ignore everything my peers and I tell you to do and learn to think for yourself]." Women want men who can figure things out for themselves, and the only way to discriminate between men who do and don't is to give them a hard, confusing, self-contradictory problem and then see which ones figure out the answer.
The problem is that, while some differences certainly aren't, that doesn't mean no differences are. The questions here would be: (1) which characteristics of Iranian society/culture would need to change, and how, for Iran to "turn into Afghanistan" as is meant by that statement here (presumably culturally/socially/governmentally or the like, rather than literally or demographically), and (2) are those changes within the bounds of what is possible in a population made of people who we would genetically categorize as "Persian."
Unfortunately, I feel like our level of knowledge on this kind of thing is akin to Archimedes's level of knowledge of orbital dynamics and special relativity. We just haven't done the (potentially centuries' worth of required) scientific work to actually gain meaningful insight into this.
Freedom favors the smart and responsible people who can control themselves and make good decisions. Freedom says you are accountable to yourself. Freedom is for the people who make better decisions in their life than a central bureaucrat on a power trip could do. If you're less happy in a free society, that's on you for overeating or being an asshole or choosing to gamble or whatever stupid shit you do.
This is one of those things that seems obvious, but also seems like it's not talked about nearly enough, to the extent that people genuinely don't understand it as obvious. I certainly wish the feminist movement talked more about these downsides and the fact that many women will end up less happy (and, quite possibly, less good by whatever they might judge as "good" in terms of their life), but that this is a worthy cost to pay for the freedom that feminism offers them. Because, right now, I see so many women being failed by the feminist movement, having been convinced that freedom won't have these severe and significant downsides, who then conclude that their own lack of happiness despite their freedom means that the movement clearly needs to keep doing more ~~until morale improves~~ until somehow both greater freedom and greater happiness are achieved. Without that grounding in actual reality - and the tradeoffs that are always present in reality - it's become a movement that just keeps inviting greater and greater justified pushback while leaving its supporters dissatisfied.
Of course, the ~~market~~ movement can stay irrational longer than you can stay ~~solvent~~ alive, and there's a sucker born every minute, so its inability to - and apparent lack of desire to - accomplish its stated goals doesn't mean that there's going to be some correction anytime soon.
But that doesn’t justify looksmaxxing to the extreme lengths some of these people go to. You can agree that looks are important without devoting thousands of hours to going from an 8 to an 8.5, for example. Simply agreeing that looks matter doesn't make bone smashing or leg lengthening rational.
I'm reminded of a line in some op-ed (NYTimes or Boston Globe, IIRC) I read in the early 2000s, where some Republican pundit was justifying his pushback against President Bush, in part by saying something like, "When the car you're in has veered sharply to the right towards a cliff, the proper thing to do isn't to turn the wheel back to neutral; it's to turn it sharply back to the left." Looksmaxxing to the point of self-surgery like in Gattaca seems extreme to a demented extent, but it's a response to what I perceive as an environment in which the idea that looks don't matter at all has become the only allowable opinion to be expressed, to the extent that a significant portion of the population in those environments has decided to believe it, as expressed by their behaviors in terms of looks with respect to romance.
Either they're neuro-divergent to the point of suicidal credulity (in which case I don't trust that you actually read society's message correctly; there are implicit messages), very young, or actively in denial. Someone like Lindy West or the fat acceptance types are not unaware of their lower status; they reject it and reject anything that could fix it because they've decided a political solution is the only moral one. I suppose you can say that the last group was brainwashed into it, but they're not ignorant. They're willfully opposed, and you have to know what you're fighting in order to fight it.
This seems like just a semantic argument. Yes, these people are "aware" of these things happening, but, as you say, they "reject" it, because it's "immoral." Part of that rejection is the "suicidal credulity" and "denial," which leaves them unable to understand that some fat acceptance type having "low status" due to their fatness is a reality that society can't be routed around through wishful thinking and bullying, at least not for longer than the emperor can walk around naked before some kid asks why. Their behavior, which leads to their own suffering, shows that they don't know their model of sexual attraction in society is useless in the face of the underlying reality; they're still missing the core knowledge that the "sexual marketplace [as] the manosphere describes" it is accurate.
But there's only so far you can get with the argument that people are this ignorant, that they think Chris Hemsworth takes his shirt off because women are attracted to Aussies.
The actual factually inaccurate but morally right explanation is that the only reason Hemsworth's good body attracts women is that women have been hopelessly brainwashed to value those things (similarly to how men have been hopelessly brainwashed to value youth, skinniness, etc. in women), and that simply freeing them from the brainwashing would make women exactly as attracted to Danny Devito as to Chris Hemsworth if their personalities were the same (similarly to how simply freeing men from the brainwashing would make them exactly as attracted to Oprah Winfrey as to Sydney Sweeney if their personalities were the same - that this hasn't happened indicates that we must free them even harder from their oppressive brainwashing that they cling on to so hard). This kind of thinking is basically universal in most Blue tribe environments I've been in (which has been roughly 3.5 decades in a row now), due to many Blue tribe environments enforcing this ignorance through heavy censure of any sort of inquisitiveness or curiosity at analyzing the situation (in a way that isn't intentionally biased in order to arrive at the Morally Right conclusions).
Here's the truth nuke: Clavicular is not an incel. He is living proof of the sexual marketplace the manosphere describes, which is heavily determined by looks, money, height, race, social status, etc. He pulls taken women with minimal effort.
Everyone already knows this.
That's not true, though, unless you're using some sort of "subconsciously know in a way that is directly contradictory to their behavior, their words, and their conscious beliefs" meaning of "knows" here. We have vast swathes of the population who genuinely believe that the part quoted above is merely the delusions of old, crusty, conservative ignoramuses who don't understand the Correct Feminist way that romantic relationships actually work among humans. The existence of these vast swathes who don't know this is pretty much why incels have become a noticeable issue at all in the culture wars.
Reading your response before reading the quoted part (though I caught the capitalized names in the quote and registered them mentally), I had assumed the phrase was meant to invoke someone laundering the current Overton Window as a way to crystallize it where it is, rather than someone expanding or shifting the current Overton Window. Since we don't have an equivalent job for making windows bigger or moving them around in a wall (though hopefully nanotech will be so cheap in the future that using it for some silly feature like sliding the windows in your home into any arbitrary configuration at any time will be taken for granted), I can't think of a term that I'd find more appropriate.