Ukrainian corruption has no relevance in the context of the Russian invasion. Ukraine could be a hellhole, and it still doesn't deserve to be invaded. Bringing up "Ukrainian corruption" is repeating Russian propaganda in an attempt to justify the invasion.
I disagree. I think Ukrainian corruption is quite relevant when it comes to how and how much 3rd parties might want to fund Ukraine's defenses. Just because Ukraine doesn't deserve to be invaded doesn't mean that it deserves to be helped in its defense by anyone else. The world isn't just, and it's also not anyone's responsibility to make it just. One can certainly argue that we ought to make it more just by making sure Russia's unjust invasion doesn't get rewarded, but taking that point for granted amounts to treating it like a religious crusade rather than politics and war.
More broadly, jumping to conclusions about motives and implications based on simple statements of judgment is something I'd rather see much less of in all contexts, and certainly on The Motte. If you believe that doing so is correct, then I think you're right that you'd be better off just ignoring this site.
I don't intend this to sound condescending, but this parallel has been so obvious to me for probably the better part of a decade by now, that I'm surprised that someone on TheMotte would only notice it now. Though perhaps it actually speaks ill of me and my hobby of paying attention to the culture wars around popular media that I noticed the parallels so early and found it so obvious.
The all-woman Ghostbusters remake came out in 2016, almost a full decade ago, and that was one of the earlier big examples of the whole "we didn't fail the audience; the sexist, misogynistic audience failed us by not paying money to spend 2 hours watching our film" narrative being pushed. That was 2 years after Gamergate, which wasn't quite that specifically, but it was a major flashpoint in video game culture where major video game journalists, devs, and commentators were explicitly telling their customers that their tastes were wrong, and that they had a responsibility to submit to the enlightened, correct tastes of the then "social justice" (equivalent to "woke" today) crowd. This knocked over some dominoes that resulted in many video games designed to appeal to that SocJus crowd being released 5-10 years later, i.e. the last 5 years. Examples include failures like Concord and Suicide Squad: Kill the Justice League from last year, as well as successes like The Last of Us: Part 2 and God of War: Ragnarok (I suspect it's not a coincidence that these successes were both sequels to hugely popular games, building on a strong existing base).
In film, besides 2016's Ghostbusters, major examples of this phenomenon include 2017's The Last Jedi and most Star Wars works that followed, as well as 2019's Captain Marvel and most Marvel movies that followed. And though many of these films did fine or even great at the box office, there was plenty of controversy around more old-school fans reacting negatively to various plot points and characterizations, and then being called bigots in return by both filmmakers and commentators. There were smaller examples as well, such as Terminator: Dark Fate and the Charlie's Angels remake-remake, both of which bombed in 2019.
A big part of it, I think, is the SocJus mentality that all of reality is dominated by power differentials, such that each individual of [demographic] is necessarily disadvantaged compared to each individual of [some other demographic]. This means that if that individual of [demographic] fails, or just doesn't succeed as much as they imagine an individual of [some other demographic] would have, then their failure is due to the bigoted society that created the power dynamics that made them disadvantaged, rather than due to that individual's own flaws. This, of course, is how millionaire stars can claim to be lacking in "privilege" - the claim isn't that they're not wildly successful, but rather that they aren't as wildly successful as an equivalent person of [some other demographic] would have been. Also of course, this is completely unfalsifiable.
And if you approach things with that mindset, that belonging to [demographic] means that any failure is due to the structural bigotry that reinforces the power dynamics of society, then naturally, when your film/video game/electoral candidate fails, you're going to blame structural bigotry. I.e. your audience, the gamers, the voters.
Also of course, if you just blame external factors, it hampers your ability to self-improve. But you can still succeed as long as all those external factors submit to your demands; if calling someone racist can get them to buy your game, then that's just as good as making a better game. In practice, this doesn't really work. But the people making these decisions seem to be in echo chambers where calling people racist does get others to submit to their demands. And while everyone lives in echo chambers to some extent, the left/progressive/Democratic crowd has been very openly and very explicitly calling for strengthening the boundaries of their own echo chambers through censorship of opposing voices. Which leads them to model the general audience very poorly. Which costs you money. If you have a big bankroll, you can keep going with that for a while, but eventually, that money runs out. I think 2024 was the year many of these decision makers finally recognized that they could see the bottom of the barrel of money they'd been feeding their projects. In video games, we might see an actual closure of Ubisoft this year, depending on how their next Assassin's Creed game - one that took direct inspiration from the BLM riots of 2020, according to a developer, IIRC - does, after the mediocre reception of their Star Wars game last year.
I wonder if the Democrats will eventually have a moment when the stark reality of their failures simply can't be tolerated anymore, resulting in a change in tack. I was hopeful right after the election last year, but most signs since then have made me pessimistic. I just hope it comes sooner rather than later, because, as bad as SocJus is, I fully expect Republicans to be just as bad if they find that they have nearly unchecked power without a strong opposition party.
Does a union work rule have any jurisdiction in how one controls one's own electronics in one's own home? The only way I could see that applying is if someone rents from a landlord that has some exclusive deal with some union to maintain the property, which seems possible but I have no idea how common that is. I was also under the impression that even renters have substantial rights over their homes that override the desires of the landlord. In Massachusetts, I know that landlords aren't allowed to enter the premises without the renter's permission, for instance.
IANAL, but I also can't see such a rule being enforceable or legal, given how much of a personal and public safety hazard unpowered smoke alarms can be.
Maybe we should distinguish our terminology somewhat - perhaps the Doom demons are intelligent but not sapient? And I previously used the word 'intelligence' synonymously with sapience, rather than with this lower level of simulated agency?
I'd agree with this characterization. I also would say that, personally, I don't associate intelligence with deserving rights. That opens a door that I'd rather keep quite shut. I don't think these terms are completely clear cut, but it seems fair enough to say that sapience, sentience, consciousness, and the ability to suffer are things I'd associate with deserving rights.
I'd say that, if your laptop is running Doom with its artificial intelligence runtimes controlling the way an imp moves around on your screen, then it's displaying intelligence. Likewise if it's running a local LLM to produce text based on text-based inputs. I see intelligence as the ability to solve complex problems, such as making brown pixels appear as a demon attempting to murder you or producing text that appears as a conversation partner in response to a prompt. Where exactly that line for "complex" is drawn is admittedly pretty vague and correctly controversial, though.
I think many parts of AI hypotheticals are interesting, including whether or not they're conscious or have agency and whether or not they're dangerous. But in terms of real-world applications of modern and upcoming AI, I think the danger/use part is far more interesting than the consciousness part. I expect that, for the near future, it's highly unlikely that we'll get AI that we can say with any meaningful level of confidence is "conscious" or has subjective experience or whatever, but it's almost certain that we'll get AI that's useful/dangerous (arguably, this has already happened). I think there's a high chance we'll get AI that appears to have agency in the near future as well.
Also, about the affirmative action argument, yes, this is why it's a super questionable solution. Can't you imagine some world though in which the de facto violations of meritocracy are so bad that a de jure violation might actually end up improving the de facto situation? I definitely agree that it's usually not a good idea to use bad means for what you think will be good ends.
One major problem with this situation right now is that, for as much as we can certainly imagine that world, the organizations and individuals that society has relied on to check if our real world is at all similar to that imagined world have so destroyed their credibility that we can't actually trust their claims that they verified that our world is similar to that imagined world. It may be possible to regain that credibility within my lifetime, but I'm skeptical that that will happen, and I'm pretty sure it won't happen in any time frame meaningfully shorter than that.
Which is to say, the very notion that there are good ends to be pursued here is contingent upon something that we have no way of verifying is true. That doesn't mean it's not true, but it does mean that it should be taken about as seriously as people claiming that some Jewish conspiracy is what's making Jews so successful or whatever.
I find this argument strange, because being able to kill me is not evidence of a machine being conscious or intelligent.
Thus I'm going to give the chad "yes". Maybe one day I get killed by a robot, and maybe that robot is not conscious and has no self-awareness. That it killed me proves nothing.
It seems to me that you pretty much agree with the commenter you're responding to, that it simply doesn't matter if the AI has consciousness, qualia, or self-awareness. Intelligence, though, is something else. And whether or not the AI has something akin to internal, subjective experience, if it's intelligent enough, then it's both impressive and potentially dangerous. And that's the part that matters.
There's something funny in a through-the-looking-glass sort of way about the way the term "woke" has evolved the past decade or so. In entering the mainstream, it became a way to describe old-school racism/sexism/bigotry/etc., just in a "progressive" direction, using a term that sounded nice, in a way that would be palatable to people who liked to think of themselves as against old-school racism/sexism/bigotry/etc. But it quickly became identified with that underlying thing it was describing, and now it's being used to describe the old old-school racism/sexism/bigotry/etc. in the traditional direction, as a way to denigrate it. Perhaps because terms like "racism," "sexism," etc. lost their edge due to constantly being used to describe completely innocuous and often virtuous things, while terms like "woke" kept being used to describe things that were traditionally called "racist," "sexist," etc.
I've quoted Shakespeare before, that a rose by any other name smells just as sweet, and shit by any other name stinks just as foul. Observing this real-time shuffling of words around meanings has been fascinating. It seems that activists who helped to popularize the term "woke" have a real, good faith belief that changing the words we use really, truly, actually changes the underlying thing in some real way - they get high off their own supply, so to speak.
Other times, your foot gets infected, and if you don't cut off everything from the knee down, your entire body shuts down. And other times, some cells in your breast start reproducing uncontrollably, and if you don't cut off most of the breast, again, your entire body shuts down. The pain and loss of those healthy cells - a majority of the cells that were cut off were probably healthy! - are real and shouldn't be downplayed. But sometimes it's the least worst option.
I'd say that infections that spread toxicity through the rest of the body or a cancerous growth that grows uncontrollably in a way that crowds out and kills the healthy cells are better metaphors for this situation in academia than a prefrontal cortex sometimes not making the best decisions.
I don't see how this responds to anything I wrote. The idea that a house elf in Harry Potter or an elevator operator in Brave New World is "downtrodden" due to "power inequalities" is, in itself, a rejection of transhumanism. It seems to me like you're just trying to have your cake and eat it too.
To get back to the topic of this scifi gender-dysphoria-B-gone pill vs sex-change-down-to-the-molecular-level surgery, I still contend that a desire for transhumanism can't support the latter without supporting the former. There could be other reasons for supporting the latter without supporting the former, but that'd be something like believing in the sanctity of the mind over body. Which many people do believe in, on the basis of the eternal soul. But which also isn't convincing to people who don't believe in souls.
Such is the trap of value systems, you can't really live without one and any sort of conscious change to yours may run against your current one, like in the tale of Gandhi and the pacifism-reducing pills. Even those mindsets that the orthodox consider "deviation" and "degeneracy" are, by definition, part of the human condition. There is nothing in the Codex of Transhumanism that says we ought to keep our minds so open that anything can fall in.
Sure, and that's not an argument against what I pointed out. Again, there's no particular reason to believe that those societies would be "failure modes" of a transhuman future.
Like, the whole house elf analogy just reminds me of Brave New World, which was a novel that played around with that concept quite a bit more than in Harry Potter. I don't see any good reason why a transhumanist wouldn't consider that society a perfectly cromulent one. And personally, I would see that world as a very good one to live in, if not to aim for achieving in our own.
But calling those "failure modes" seems, in itself, to be against transhumanism. You or I with our value systems shaped by our human society and human biology and psychology might find those to be failures, but who knows how someone who's transcended their humanity would judge it? Fundamentally, there's no way to tell, and so if we want to keep in the spirit of transhumanism, we ought to keep our minds open.
I'm also skeptical that the typical person encountering this hypothetical is considering the things that you're positing, not even implicitly or subconsciously.
Similarly I would rather be more attractive than be able to tolerate the fruits of being less attractive; would rather be able to achieve my goals with less work than be able to work more, etc.
These don't seem similar, though. If we applied the framework of these things to being trans, it would mean that a transwoman isn't someone who simply feels like a woman and thus wants to change his body to match it, it's someone whose goal is for other humans to treat him like a woman (analogous to your 1st example) or whose goal is to physically appear as a woman (analogous to your 2nd). Those are different things.
It's also not clear to me how it's more freedom to change one's body than to change one's mind. From my experience, changing one's body quite drastically is often quite easy, but changing one's mind even a little is often quite difficult. It's fundamentally difficult to compare the two, but I'd argue that being able to manipulate our minds as freely as we manipulate our physical bodies is more transhumanist, not less, than just wanting to manipulate our physical bodies to match our minds. I think, to most people, a non-humanoid like a cartoon cat or non-android robot that seems to think and behave like a human is "more human" in some sense than something that appears biologically like a human but seems to think and behave in a way that's completely foreign to humans. At the extremes, I think that people consider ChatGPT "more human" in some way than an android sex doll. So it seems to me that if we want to transcend our humanity, having the freedom to manipulate our minds as easily as taking a pill is at least as significant as having the freedom to manipulate our physical bodies to be the other sex.
In general, if someone is a good writer, then we would prefer him to write more rather than less. More of a good thing is good! Saying that you prefer writing to always be as short as possible is a bit like saying that the best sandwich is the one with the least meat on it. We would have to assume that such a person is not much of a meat eater to begin with.
I'd consider it more akin to saying that a good pitcher is one who faces as few batters as possible per inning. It's not always true, but it's a very good indicator.
Maybe they used a mixed cocktail so they'd never know whose it was?
A "woman" could centrally be an "adult human of the sex that produces large gametes", and we could still allow for stretched usages like calling a particular type of game piece in a board game a "woman", or granting trans women the status of honorary "women."
It's not something we can know for sure without checking, but my suspicion is that "we'll acknowledge that transwomen are women in a way similar to the Queen in chess being an honorary woman" would not be welcomed by people insisting that transwomen are women. For instance, almost no one would bat an eye at someone using "it" to refer to the chess piece, implying that the speaker sees it as an object, whereas TRAs would tend to object to someone using "he" to refer to a transwoman (one who doesn't idiosyncratically use masculine pronouns, anyway), implying that the speaker sees him as a man who identifies as a woman.
Besides that, of course there are a whole host of demands about what claiming that transwomen are women implies about the rights and privileges transwomen are entitled to, with respect to woman-only spaces, that don't apply to chess pieces. Since chess pieces aren't sentient and have no will, the parallels break down, but to use the other example: when a child accidentally rips open his teddy bear, we don't treat it as if he just murdered his pet. When he doesn't feed it while keeping it constrained in his bedroom, we don't treat it as if he's being neglectful of or cruel to a pet.
I just don't think "transwomen are women by stretching the definition of women, but they're men in every other way we treat them in society" is a position that many people would support, certainly not among TRAs. The central conflict here has little to do with word games.
So if the AI were literally accessing the video through that link, 3:00/2x is indeed the fastest it would be able to provide the transcript.
You can make YouTube videos go at arbitrarily high speeds just using a Chrome extension. I actually had an issue recently where an extension was causing the videos to default to 10x speed, which was both amusing and annoying. In any case, anyone with a link to a YouTube video has the ability to just download the video using basic non-AI tools, so the AI wouldn't be limited by the UI that YouTube presents human users.
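To illustrate what I mean by "basic non-AI tools", here's a minimal sketch using the yt-dlp Python library to pull the media file directly; the URL and output filename are placeholders, not a real video or a claim about what the AI actually did.

```python
# Minimal sketch: downloading a video/audio stream with yt-dlp (placeholder URL).
from yt_dlp import YoutubeDL

url = "https://www.youtube.com/watch?v=EXAMPLE_ID"  # hypothetical link

options = {
    "format": "bestaudio/best",       # take the best available audio stream
    "outtmpl": "downloaded.%(ext)s",  # save locally for offline processing
}

with YoutubeDL(options) as ydl:
    ydl.download([url])  # fetches the file outright; nothing "plays" at any speed
```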
I don't see how it could possibly generate subtitles instantly on the fly for a music video with a runtime of three minutes?
I think your explanation about the AI lying and confidently misrepresenting evidence in this case is almost certainly true. But I don't see how the runtime of the music video would matter for this. If the AI were analyzing the music video - which I don't think it did - it would be analyzing the bits that make up a video file after downloading it from wherever it is, in which case it just needs to process the bits that make up the file, and the speed of that would be dependent on many factors, but certainly not limited by how long the video is. A human might be limited to maybe half the time at the shortest if they watched the video at 2x speed, but I don't see any reason why an AI couldn't transcribe, say, all recorded audio in human history within a second, just by going through the bits fast enough.
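For concreteness, here's a minimal sketch of the sort of offline transcription I'm picturing, using the open-source Whisper speech-to-text model on a local audio file (the filename is a placeholder carried over from the download sketch above). Wall-clock time depends on the model and hardware, not on the recording's three-minute runtime, and on decent hardware it can finish far faster than real time.

```python
# Minimal sketch: transcribing a local audio file with openai-whisper.
# Processing speed is bounded by compute, not by the audio's playback length.
import whisper

model = whisper.load_model("base")           # small model, chosen for speed
result = model.transcribe("downloaded.mp3")  # placeholder filename
print(result["text"])                        # the generated transcript
```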
Insofar as this is possible (I believe Searle disagrees that it is), then the room does speak Chinese, because it's just a brain.
I'm not sure how one would argue that it's not possible. Is the contention that there's something ineffable happening in neurons that fundamentally can't be copied via a larger model? That seems isomorphic to a "god of the gaps" argument to me.
We don't know the "ground truth" either, though. All the information that we parse, such as touching the Earth or seeing the moon in the sky or through a telescope are basically hallucinations created by our brains based on the sensory input that we take in through detection mechanisms in our cells. We have to trust that the qualia that we experience are somewhat accurate representations of the "ground truth." Our experience is such that we perceive reality accurately enough such that we can keep surviving both as individuals and as a species, but who knows just how accurate that really is?
LLMs are certainly far more limited compared to us in the variety of sensory input they can take in, or in how often they can update themselves permanently based on that input, and the difference in quantity is probably large enough to have a quality of its own.
Where I agree with the idea behind the Chinese Room is exactly that. Yes, the agent can answer questions about the things it’s supposed to be able to answer questions about well enough to fool an onlooker asking questions about the subject it’s been trained to answer. But if you took the same agent and got it off script in some way — if you stopped asking about the Chinese literature it was trained to answer questions about and started asking questions about Chinese politics or the weather or the Kansas City Chiefs, an agent with no agency that doesn’t actually have a mental model of what the characters it’s matching actually mean will be unable to adapt.
Perhaps I'm not as familiar with the Chinese Room experiment as I thought I was. I thought the Chinese Room posited that the room contained mappings for literally every single thing that could be input in Chinese, such that there was literally nothing a Chinese speaker outside the room could say that would produce a response indicating a lack of understanding of Chinese? If the Chinese Room posits that the mappings contained in the room are limited, then that does change things, but then I also believe it's not such a useful thought experiment.
I personally don't think "understanding," at least the way we humans understand (heh) it, is a necessary component of intelligence. I'm comfortable with calling the software that underlies the behavior of imps in Doom as "enemy artificial intelligence," even though I'm pretty sure there's no "understanding" going on in my 486 Thinkpad laptop that's causing the blobs of brown pixels to move on the screen in a certain way based on what my player character is doing, for instance. If it talks like a duck and walks like a duck and is otherwise completely indistinguishable from a duck in every way that I can measure, then I'll call it a duck.
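To make that concrete, here's a toy sketch of the flavor of rule-based "enemy AI" I have in mind; it's purely illustrative and not the actual Doom source. A handful of if/else rules is enough to produce an agent that wanders, chases, and attacks, with no "understanding" anywhere in the loop.

```python
# Toy rule-based "enemy AI": nothing but if/else rules. Purely illustrative,
# not actual Doom code.
def imp_step(imp_pos: float, player_pos: float, can_see_player: bool) -> str:
    distance = abs(player_pos - imp_pos)
    if not can_see_player:
        return "wander"               # no target: shuffle around randomly
    if distance < 1.0:
        return "melee_attack"         # close enough to claw
    if distance < 20.0:
        return "throw_fireball"       # in range: ranged attack
    return "move_toward_player"       # otherwise, close the distance

print(imp_step(imp_pos=0.0, player_pos=5.0, can_see_player=True))  # throw_fireball
```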
Your tests have the exact same "problem" as the Turing Test, though. There's no way to tell if the bot actually "understands" Japanese just because it is able to produce Japanese words that are understandable to Japanese people after interacting with Japanese people a bunch. There's no way to tell if the bot actually "understands" physics just because it responds to an equation with symbols that a learned physicist would judge as "correct" after interacting with a bunch of physics textbooks or lectures or whatever. It could just be updating the mappings within its Chinese room.
One might say that updating the mappings in its Chinese room is essentially another way of describing "understanding." In which case the Turing Test also qualifies; if the chatbot is able to update its mappings during its conversation with you such that it appears to you as indistinguishable from a human, then that's equivalent to a bot updating its mappings through its conversations with Japanese people such that it appears to Japanese people as someone who understands Japanese.
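Here's a toy sketch of what I mean by "updating the mappings" (purely illustrative; it's not a claim about how any real chatbot works): a bare lookup table that answers from stored input-response pairs and adds a new pair whenever it hits an input it doesn't recognize.

```python
# Toy "Chinese Room" lookup table: answers come from stored mappings,
# and unfamiliar inputs get a new mapping added. Purely illustrative.
mappings = {
    "你好": "你好！",              # "hello" -> "hello!"
    "你喜欢象棋吗？": "我喜欢。",  # "do you like chess?" -> "I do."
}

def respond(prompt: str) -> str:
    if prompt in mappings:
        return mappings[prompt]
    # "Update the mappings": store a canned reply for the new input so the
    # same question gets a consistent answer next time.
    mappings[prompt] = "请再说一遍？"  # "could you say that again?"
    return mappings[prompt]

print(respond("你好"))          # known mapping
print(respond("天气怎么样？"))  # unknown input: a mapping gets added
```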
If you're claiming that random crying fits are caused by hormones, it seems to me that you're just saying that the part at the beginning of that paragraph:
women supposedly being men’s equal in mental strength, emotional resilience, seriousness as adults
is actually not true, for a biological reason.

I think SSCReader explained this pretty well. Basically, if all you see besides corruption and zero-sum power jockeying - which are about as universal to humanity as death and taxes, by my lights - are weird ideological crusades, well, there's a good reason for that. It was way back in 2014 that I noticed that SocJus was clearly a religion that was intrinsically faith-based, and I'm quite sure I was nowhere near the first to notice. The past 10 years have only made me more sure of this, and they have also convinced me that there's some veracity to the theory that the typical human mind has a god-shaped hole in it that necessarily gets filled by something.