Arrival is a direct adaptation of Ted Chiang's short story "Story of Your Life". I love Ted Chiang, but this is one case where I think the adaptation is marginally superior to its source material. Villeneuve and his screenwriter are to be commended, not just for adapting a short story which is aggressively uncinematic and cerebral, but for doing so faithfully and in a way which is engaging throughout. I'd be curious to know if Chiang has ever read Slaughterhouse-Five.
Of course, the concept behind both these works could have been arrived at independently by both authors, but given the time periods, the extreme similarities, and Vonnegut's stature, it would be truly shocking to me if Chiang had never read Slaughterhouse-Five, or at least a summary of it. It'd be like some prominent author writing a successful story about a prince who, after his father is murdered by his uncle, orchestrates a revenge plot, without ever having read Shakespeare's Hamlet or a summary of it.
Personally, I wouldn't even characterise Slaughterhouse-Five as postmodern literature. It's a very short and accessible novel which employs a sci-fi* premise in order to make a powerful anti-war statement.
Having read this for the first time around 10 years ago in my late 20s or early 30s, I generally agree. However, I must admit that I don't recall the book making an anti-war statement, powerful or not. I listened to the audiobook of All Quiet on the Western Front after I read Slaughterhouse-Five, and looking back, it reminded me a lot of Vonnegut's book in how it describes the horrors of war in plain, matter-of-fact ways, a la the famous "so it goes."
I'll also add that the sci-fi film Arrival came out while I was close to finishing the book, and it was kinda surreal watching that film and realizing in the moment that its core sci-fi concept was pulled directly from that book.
I wonder if it has to do with what seems likely to be the fact that the average age at which males first encounter porn has been decreasing across recent generations, such that a far larger proportion of boys under 18 - and even under 13 - have consumed significant amounts of porn in 2026 than in 2006 or 1986. And due to current laws, this means that these boys have spent some of their most formative years admiring, and feeling pleasure watching, women who are older than them.
There's also the fact that, across the generations, the length of time for which someone looks like a young adult has been increasing. Even if you go back just to the 90s - and certainly if you go back to the 60s - the proportion of people in their 30s who looked like they could be 45, versus those who looked like they could be 25, seemed much higher.
I have no first-hand experience of Europe in the 90s, but growing up in America in the 90s, the notion that Europeans looked down on America and Americans for being backwards religious conservative hyper-capitalists without basic human decency like universal healthcare was pretty much a cliche in my experience. Obviously this was strongly a function of the environment in which I grew up, but I don't think it was purely a function of that. So, at the very least, Americans admiring Europeans based on the belief that those Europeans have disdain and contempt for America for its American qualities has been around for 30+ years.
This is the Boston area, and the ultimate Frisbee guys are primarily nerdy, college-educated professionals in some field, including tech. Elite colleges don't seem any more overrepresented in this group than in any other group of college grads; I can only name one ultimate Frisbee guy I know who went to one: Columbia.
Now, they do enjoy drinking beer and watching sports, but I'd say not in a stereotypically male way. More like a stereotypical nerdy yuppie way, only as a social outing at a bar, and people basically NEVER casually ask each other, "Hey, what'd you think of the game last night?" or whatever. We drink Bud Light Lime ironically at tournaments, but otherwise, it'd be very rare for one of us to be seen drinking a beer that's not some microbrew, less popular import, or quintuple-IPA abomination.
A lot of them, you'd clock as nerds, some of them as hipsters, and very few as jocks. Though I'd say, due to the nature of a physically taxing sport like ultimate, the overt nerdiness level is pretty low.
I wonder how soon LLMs will become cheap and fast enough that all websites can be rewritten on the fly to match whatever UI style and format the user wants. I feel like the tech is pretty much there in browsers, but no human has the time to write a bespoke algorithm for each site they use, which could also change at any time. A sufficiently fast AI could fill that role and do it in real time, adapting to any changes by the website devs.
Because I too dislike the Substack comment UI and wish Reddit (and Twitter too, for that matter) hadn't killed all competitor apps and forced us to use theirs.
You're experiencing a bubble effect. I'm an elder millennial, so my peers are mostly late 30s, but because I play ultimate Frisbee, which is a college team sport, a lot of the people I hang out with are white males in their 20s as well. I know a grand total of one confirmed Trump voter among them, a group that includes myself (I'm white-adjacent enough to count, in terms of how vast swathes of society pre-judge me), and he will only mention it to me when we're the only ones hanging out or when we're with his friends who are barely my acquaintances. Now, I don't know the precise voting habits of every one of my white male acquaintances, but given just how ubiquitous it is to hear some random jab about Trump followed by the equivalent of "aye" or applause in any social situation, and how much pushback I receive when I try to call out dishonest or manipulative framing of Trump's misdeeds, I'd wager that the number of white males I know who even consider it virtuous to treat Trump honestly, much less support him, is vanishingly small.
What is the typical White, Male, College-educated Democrat voter like? I was surprised looking at the cross tabs for the 2024 election that this group was only 50% pro Trump.
I was shocked by the 50% figure as obscenely high, but surely you mean White, Male, College-educated, but not Democrat? If 1 out of every 2 White, Male, College-educated DEMOCRATS supported Trump, that would be quite the coup, almost literally.
I think white and male skews pro-Trump, but college skews heavily anti-Trump, and it lands somewhere around 50%. It speaks to the power of ideas over the power of race or sex that college is able to be equal and opposite to those other forces, which also speaks to the utter idiocy of judging the value of words by the speaker's race or sex rather than by the ideas they're expressing.
So I don't know if the OP was motivated by that, or if there's some other reason, but I've definitely noticed what seems like a big dichotomy in the way people approach modern generative AI tools. Some types of people see a tool whose limitations make it fail in spectacular ways that seem silly or stupid, throw their hands up in the air, and declare it not sufficiently useful for their purposes. Other types of people see a tool with limited abilities and figure out a way to exploit those abilities to accomplish things they couldn't without the tool, even if it means adjusting and inventing new workflows.
I first noticed this when I got heavily into Stable Diffusion in ye olden dayes of 2022. Of course, awful hands, foreground lines merging into background lines, inconsistent lighting, and hallucinations were all famous issues of image-generation AI then. They're still issues now, but vastly reduced. Some people saw that and declared AI useless for their needs, since their hand drawing allows for the control they need that AI doesn't. Other people saw that abandoning a generation over a mess with 7 fingers was like making one bad brush stroke on an empty canvas and giving up on the painting, and figured out that it's easy to iterate on subsections of the image multiple times, allowing them to create illustrations far beyond their manual ability while still avoiding the common AI pitfalls.
I noticed it happening with LLMs shortly after, where some people zero in on stupid mistakes like the hard-R problem of strawberries and declare the tool too inconsistent or too stupid to be of much use. Other people zero in on the limited abilities and figure out how to build structures and scaffolding to allow the tool to exceed those natural limitations, enabling them to create code that they couldn't have before, or that would have taken a lot more time before.
I don't think the former type of person is doing this in bad faith, or with a desire to sneer. I think there's probably just a spectrum in people's attitudes with something like this, and because AI is both ridiculously bad at some things and ridiculously good at others, this causes the spectrum to bifurcate.
The idea that right-wing dominated forums would have any sort of moral superiority (in terms of the average rate of dogpiling, etc. behavior by users) over left-wing dominated ones in some general/average/typical/categorical sense - or the reverse - is one I find so utterly absurd* and detached from reality that I'd only charitably attribute it to someone on TheMotte if they actually, explicitly made that specific claim. I don't see anything in the comment to which you replied, nor in this comment thread in general, from which I would feel comfortable inferring such a claim. It seems to me that the comment is about the experience of individual users who tend to "rock the boat" with respect to the dominant side in the forum.
And it seems to me to be about the specific state of things right now; i.e. leftists do have easy access to lots of online echo chambers in a way that rightists don't. As such, even if leftists and rightists are exactly as likely to fall victim to their natural human biases when managing forums they dominate, we see an emergent difference in the types of leftists and rightists who congregate at different types of forums. I'm not the original commenter, but it seems to me that reading some sort of moral judgment - about groups, forums, sides, etc. - into the comment to which you replied is jumping to conclusions.
* I find this absurd because, besides being just obviously impossible to adjudicate in any way, it also has basically no consequence for anything in terms of how people actually interact with each other and with forums. It's not as if there's some movement that claims, "right-wingers are non-coincidentally, non-incidentally morally superior to left-wingers in terms of maintaining good-faith discussion forum standards, and therefore, for the betterment of discourse throughout society, we should make all forums right-wing dominated" or whatever (except in the tautological case where people redefine "values logic, empiricism, and evidence over emotion" as "right-wing"). I mean, maybe there is, but it's certainly not one that has any meaningful influence.
If 70k out of 100k comments on a reddit forum are "boo-outgroup" vs 800 out of 1k on the motte, the motte is far more "boo-outgroup" despite there being fewer "boo-outgroup" comments on the motte overall. The rate is much higher. Your stated "rate" is per instance, or total count. This manipulates statistics to give the lower-population forum more grace, when per-capita is more honest, because it accounts for the confounding factor of the lower population.
By controlling for the overall population of the forum, you're abstracting away the actual interesting part. If your purpose is to judge the average morality of the users of a forum, as measured by their penchant to dogpile, etc. commenters who try to rock the boat, then sure, you can use that metric. But I'm not sure that's of particular interest to anyone, and I'm certain that that is so abstracted away from the way people interact with online forums that no one can make any sort of meaningful intuitive guesses about such things, especially when comparing numbers that are orders of magnitude apart.
That's why when people talk about places like this/Reddit being unfair or hostile to leftists/rightists, I believe that it tends to be about a typical (boat-rocking) leftist/rightist's experience in using that forum, not about some sort of average of how commenters tend to react to such comments. Perhaps I'm wrong, and most people talking about such things are using your abstract metric; I just don't know what use that metric would be other than for some sort of a virtue-measuring contest between forums.
If this is your major point then you are making a point I am not arguing; it's not about quantity, it's that it happens at all. This place has orders of magnitude fewer people than the mirror-image typical subreddit. This is like saying it's safer to be next to a bear in the woods because bears kill fewer people than men do.
If this is your interpretation of my point, then you are wrong. The "rate" is on a "per-[leftist/rightist] comment" basis (implicit: a comment that bucks the generally popular positions of the forum), not "per-day" or whatever. If the rate of physical injuries during a typical encounter with a bear in the woods were lower than the rate during a typical encounter with a man, then it absolutely would be safer to be next to a bear.
I'm not really going to weigh into a discussion of "quality". That is highly subjective, to the point that one could easily just say every post that gets dog-piled and mass-reported was "low quality". It's a just-so story.
If you aren't going to weigh into "quality," then all you're really doing is commenting on the lack of equality of outcomes (as measured by things like responses that amount to dogpiling, Gish galloping, etc.) based purely on left-right partisanship. And that's just irrelevant here, because the point of this forum isn't to achieve such equity. Quality is highly subjective, but it's not infinitely so; there are certainly qualities, agnostic to partisanship, that this forum specifically demands of comments both by rule and by norm, and it is a good thing that a comment's quality determines, in large part, the pushback it gets from other commenters.
Lefties here are absolutely dogpilled, mass-replied, gish-galloped, mass reported, or downvoted.
As a lefty (in multiple senses of the word) here, I disagree heavily. The rate at which this happens is orders of magnitude lower than the mirror image in a typical subreddit that has discussion about similar topics as here. By my observations, leftist posters who get treated this way are almost always treated this way in response to particularly careless or bad-faith posts*.
* Aside: these extremely low-quality posts often have characteristics which appear to me as ones that would be popular on a typical subreddit; my conjecture is that these commenters are used to calibrating their arguments for the type of scrutiny in those environments and didn't properly re-calibrate for the standards of this forum before commenting.
Reads like consensus building.
...
It's no secret that the median individual on this forum has been steadily shifting right.
Ummm...
Reading your response before reading the quoted part (though I caught the capitalized names in the quote and registered them mentally), I had assumed the phrase was meant to invoke someone laundering the current Overton Window as a way to crystallize it where it is, rather than someone expanding or shifting it. Since we don't have an equivalent job for making windows bigger or moving them around in a wall (though hopefully nanotech will be so cheap in the future that a silly feature like sliding your home's windows into any arbitrary configuration at any time will be taken for granted), I can't think of a term I'd find more appropriate.
I don't think LLMs can generate meaningful human-like feedback of what it feels like to use the software. They just don't see the UI in the way that humans do.
I don't see why LLMs would need to "see" the UI in a way similar to humans in order to generate meaningfully useful feedback for improving the UI (as well as any other element of the software) as judged by humans. It's not like the LLM would need to reason out "this UI element here gets in the way of this process due to that issue," or "in my experience of trying to use this software in my workflow, this UI element could be improved by moving it here," or whatever. It'd be doing naked, dumb pattern matching: predicting words based on the prompt (which would include the sequence of 1s and 0s that makes up the software, as well as instructions to produce the kind of text a helpful human tester would provide) and its weights. There's no proof that this would work, but I also see no reason why simply scaling up current techniques and/or making them faster wouldn't allow LLMs to generate feedback like this which is just as useful as human user feedback.
Yes, Adobe Premiere is a few million lines of code, and LLMs can create millions of lines of code within weeks. However, Adobe Premiere wasn't one-shotted by a person, and an agent can't one-shot it either. The only way to build an excellent enterprise tool is to build a shitty enterprise tool, get feedback, and improve it over time. In startup speak, this private feedback is referred to as 'moat'. LLMs make this loop faster, but you can't skip it.
The value in the text/images/media/any content that forms the feedback comes from how modifying the software in a way guided by that feedback improves the software as judged by the people who gave it (and people like them), not from the fact that the content was generated by humans using the software and expressing their opinions. Generating the feedback through actual humans who used the software is a great way to ensure that the feedback is valuable in this sense, but I don't see why a sufficiently advanced LLM (or LLM-based tool) couldn't generate feedback with just as much value (i.e. modifying the software in a way guided by that LLM-generated feedback improves the software as judged by the people who would have given the feedback, i.e. the target audience), just by predicting the next word. And then modify the software through iterations until the feedback crosses some threshold of asking for small enough changes or something. I don't think this would be considered a "one-shot," but it certainly seems like it would require almost as little investment in human effort. It's just that the LLM-based tools don't seem sufficiently advanced (or perhaps they're not sufficiently fast?).
Lol. What were you doing on the platform?
Evidently, trying to produce videos that OpenAI disapproved of. More precisely, IIRC, I was inspired by some videos I saw on Sora where, apparently through some clever prompting and/or iterations, the user had managed to generate and share a video of a woman doing yoga, shown from suggestive angles. I was experimenting with doing the same when I got the ban email. Grok Imagine has lines as well, as I alluded to before, but it places its lines very, very far from where Sora did.
I wouldn't quite put it at the level of Shakespeare words. Fortnight wasn't a common everyday word when I was a kid in the 90s, but most high schoolers would have been familiar with the word and what it meant, if memory serves, likely due to its usage in history class when reading documents relating to the Revolutionary War or the Civil War. Those were only about 150-250 years ago, not 400 like Shakespeare.
I do wonder if kids these days know that Fortnite is a play on that word and what that word means, or if they just think it's some nonsense made-up word.
Wow, that phenomenon seems really reminiscent of how "sexual assault" became a catchall term to be used when the speaker wants to create the connotation of forcible rape in contexts where the reality is some sort of harassment of a sexual nature. As well as how "sex trafficking" became a catchall term to be used when the speaker wants to create the connotation of kidnapping women into sexual slavery in contexts where any level of prostitution took place. I've come to really dislike these overt attempts to engineer language for the purpose of hiding covert attempts to manipulate others into believing things that one finds useful for them to believe, and I personally always just call them "child porn" and "revenge porn."
On the other hand, most of the effort in a commercially viable video-generation product is in the product engineering, not the model itself. That's asking a lot of effort from OpenAI in an area where they are not best equipped to beat seasoned product-engineering teams.
It seems evident from their actions that engineers at OpenAI lacked the ability or capacity to use GPT-5 to cost-effectively write an Adobe Premiere competitor with Sora integration - with a UI that's just as good, just as intuitive and user-friendly for longtime video-editing professionals, just as stable and responsive, etc.
I wonder if/when AI companies will reach the point where they could just do that for any arbitrary existing software. At what point could one of these companies just instruct the AI to generate an Excel clone that has perfect backwards compatibility to MS Excel, but also has their AI integrated in, and consistently get out a viable software product as the result? What about a Windows clone that has perfect compatibility to all Windows-compatible software, but also has their AI integrated in? What about an Oblivion clone that has perfect compatibility to all existing save files, but also doesn't require major QOL overhaul and performance mods to make enjoyable and also has their AI integrated in?
It appears to me that these issues are probably solvable. Disallowing generative editing of user-uploaded images seems like a no-brainer.
Grok Imagine has indeed implemented something like this: uploaded images of real humans become extremely difficult to edit without triggering a censor, and I think videos might be right out. Unfortunately, even this gimped censor is severely limiting, and a full-on prevention of generative editing or animation of user-uploaded images would take away like 90% of the use cases for image/video generative AI. So much of using gen AI to produce images and videos is about trial and error and iteration (manual edits -> AI generation building on them -> manual edits of AI generations -> AI generation building on those -> etc.), including using multiple non-interoperable AI tools (e.g. generate the original image in Midjourney, edit it locally using Krita and Stable Diffusion, then upload it to Grok Imagine to animate). A lack of ability to take arbitrary image input would leave the tool as only the origin point of a workflow, which doesn't amount to much beyond simple time-waster slot-machine generations.
Personally, I do not like the notion that it's possible to arrange pixels in an illegal way that doesn't involve some other independently illegal action as a causal factor and hope that attempts to make it so fail horribly. Unfortunately, I'm not hopeful, as it seems to me that support for free speech and free thought isn't very high right now in the USA. This is also why I'm still holding out hope that video gen on local hardware will become "good enough" that private servers owned by companies like OpenAI, xAI, Google, etc. don't become effective gates for this sort of creative endeavor for the layman (or at least lay enthusiast).
I tried using Sora for about a month at the end of last year, but I had to stop due to getting banned. Grok Imagine wouldn't ban me, so I've been using that instead. My wild guess is that a social media platform based entirely around AI generated videos like Sora can only exist in a sustainable way if it's explicitly for erotic/pornographic material - there's simply not enough demand for creating or viewing AI generated videos that aren't in that category to get enough users and views to pay for the generations.
I have mixed feelings, since Sora was clearly much better and more flexible than Grok Imagine, and so I would've loved to see it develop further; but, at the same time, the lower censorship in Grok and xAI's general attitude towards censorship versus OpenAI's make me think improvements in Grok are more likely to bear fruit. Of course, without Sora around, xAI has less reason to improve Grok... And Grok Imagine is also still censored, which isn't great, but it's the least worst, at least. In the long run, I'd hope that local video generation will become "good enough," but that'll probably require a world where dual 5090s with 64GB of VRAM are considered a quaint little living-room computer for sending emails and running old games at a tolerable 25fps, which I'm guessing is within 2 decades.
Do they genuinely think that a world where blockades of international shipping are normalized is one that they would actually want to live in?
The answer to your question is No, because the answer to the first 4 words of your question is also No.
That's just the baseline reality that everyone has already baked into their model of the world. The question is, why do some ideologies appear to have more of this kind of abuser in leadership roles than others? Of course, that's either a trivial question (answer: because how things appear to you is primarily determined by your biases, rather than by underlying reality) or a loaded question (i.e. the question implies that this "appearance" correlates with reality), and so the "clean" version of that question would be: "Do some ideologies actually have more of this kind of abuser in leadership roles than others, and if so, is there something about the psychology of these ideologies that leads to a difference in prevalence?"
Which is an interesting question to at least speculate about, though any actual conclusions would be completely unwarranted.
Does this come from trying to read Shakespeare? I feel like Shakespeare is best enjoyed in performance, and trying to enjoy his works by reading them is like trying to enjoy The Godfather by reading the script. There's enjoyment to be had, likely, but a lot of the experience is missing, because the target audience for the script wasn't readers, but rather actors and directors and such, for the purpose of informing them on what to perform for viewers. Personally, my favorite Shakespeare experience is the 90s film Twelfth Night starring Ethan Hawke and Helena Bonham Carter.