07mk

1 follower   follows 0 users   joined 2022 September 06 15:35:57 UTC
Verified Email
User ID: 868

No bio...

Optimistically, the academics leaving the USA are the ones most ideologically captured, such that their contributions to knowledge production are easily replaceable or even a net negative, as is the case for much of what is purportedly being cut by DOGE. Given how academia has, for a couple of generations now, been pushing the model of uplifting people by putting them into these institutions over the model of putting people into these institutions based on their ability to contribute to knowledge production, it wouldn't surprise me if even a solid majority of academics could leave the USA and leave American academia better off for it.

Pessimistically, there's enough damage to funding even in the most productive portions of academia that plenty of the academics leaving the USA really do create a "brain drain." I'd guess that academics doing actually good knowledge production are the most likely to have the resources and options to pick up their lives and move to another continent, after all.

It really speaks to the immense wealth and prosperity of the Western world that academic institutions are able to support so many unproductive and anti-productive academics; is it worth it to get rid of many of those, even at the cost of losing some of the productive ones? Or do we accept them as the cost of maximizing the number of actually productive academics? The shape of the data probably matters a lot for whatever conclusion one draws. If we're looking at a 10-90 split between productive and un/anti-productive academics, and we can cut 50% of the latter while cutting only 1% of the former, that sounds like it'd be worth it, whereas if cutting 1% of the latter results in cutting 50% of the former, it probably isn't.

Which then takes us a step back to the fact that we no longer have any credible institutions to tell us what the data looks like. The past decade has seen mainstream journalism outlets constantly discrediting themselves, especially with respect to politics surrounding Trump and his allies, and non-mainstream ones don't have a great track record by my lights, either. So I guess we'll see.

In terms of scientific research of the sort that would make the USA stronger relative to other countries - like rocketry or nuclear physics in the past - it seems to me that AI is the most relevant field, and I perceive the USA as still being the most attractive destination for AI researchers, at least in the private sector, where a lot of the developments seem to be taking place. The part that worries me the most is the actual hardware the AI runs on, which is almost universally produced elsewhere - though that's a mostly separate issue from the brain drain.

I don't intend this to sound condescending, but this parallel has been so obvious to me for probably the better part of a decade by now, that I'm surprised that someone on TheMotte would only notice it now. Though perhaps it actually speaks ill of me and my hobby of paying attention to the culture wars around popular media that I noticed the parallels so early and found it so obvious.

The all-woman Ghostbusters remake came out in 2016, almost a full decade ago, and that was one of the earlier big examples of the whole "we didn't fail the audience; the sexist, misogynistic audience failed us by not giving us money to spend 2 hours watching our film" narrative being made. That was 2 years after Gamergate, which wasn't quite that specifically, but it was a major flashpoint in video game culture where major video game journalists, devs, and commentators were explicitly telling their customers that their tastes were wrong, and that they had a responsibility to submit to the enlightened, correct tastes of the then "social justice" (equivalent to "woke" today) crowd. This knocked over some dominoes that resulted in many video games designed to appeal to that SocJus crowd being released 5-10 years later, i.e. the last 5 years. Examples of these include failures like Concord or Suicide Squad: Kill the Justice League from last year, as well as successes like The Last of Us: Part 2 and God of War: Ragnarok (I suspect that it's not a coincidence that these successes were both sequels to hugely popular games that built on a strong base).

In film, besides 2016's Ghostbusters, 2017's The Last Jedi, as well as most Star Wars works that followed, and 2019's Captain Marvel, as well as most Marvel movies that followed, were major examples of this phenomenon. And though many of these films did fine or even great at the box office, they had plenty of controversy around more old-school fans reacting negatively to various plot points and characterizations, and then being called bigots in return by both filmmakers and commentators. There were smaller examples as well, such as Terminator: Dark Fate or the Charlie's Angels remake-remake, both of which bombed in 2019.

A big part of it, I think, is the SocJus mentality that all of reality is dominated by power differentials, such that each individual of [demographic] is necessarily disadvantaged compared to each individual of [some other demographic]. This means that if that individual of [demographic] fails, or just doesn't succeed as much as they imagine an individual of [some other demographic] would have, then their failure is due to the bigoted society that created the power dynamics that made them disadvantaged, rather than due to that individual's own flaws. This, of course, is how millionaire stars can claim to be lacking in "privilege" - the claim isn't that they're not wildly successful, but rather that they aren't as wildly successful as an equivalent person of [some other demographic] would have been. Also of course, this is completely unfalsifiable.

And if you approach things with that mindset, that belonging to [demographic] means that any failure is due to the structural bigotry that reinforces the power dynamics of society, then naturally, when your film/video game/electoral candidate fails, you're going to blame structural bigotry. I.e. your audience, the gamers, the voters.

Also of course, if you just blame external factors, it hampers your ability to self-improve. But you can still succeed as long as all those external factors submit to your demands; if calling someone racist can get them to buy your game, then that's just as good as making a better game. In practice, this doesn't really work. But the people making these decisions seem to be in echo chambers where calling people racist does get them to submit to their demands. And while everyone lives in echo chambers to some extent, the left/progressive/Democratic crowd has been very openly and very explicitly calling for strengthening the boundaries of their own echo chambers through censorship of opposing voices. This leads them to model the general audience very poorly, which costs them money. If you have a big bankroll, you can keep going with that for a while, but eventually, that money runs out. I think 2024 was a big year for many of these decision makers finally recognizing that they could see the bottom of the barrel of money they'd been feeding their projects. In video games, we might see an actual closure of Ubisoft this year, depending on how their next Assassin's Creed game - one that had direct inspiration from the BLM riots of 2020, according to a developer, IIRC - does, after the mediocre reception of their Star Wars game last year.

I wonder if the Democrats will eventually have a moment when the stark reality of their failures simply can't be tolerated anymore, resulting in a change in tack. I was hopeful right after the election last year, but most signs since then have made me pessimistic. I just hope it comes sooner rather than later, because, as bad as SocJus is, I fully expect Republicans to be just as bad if they find that they have nearly unchecked power without a strong opposition party.

I've been pretty obsessively playing around with AI image generation the last 3 or so weeks, and after learning what I have in that time, it's struck me how the culture war arguments seem to miss the contours of the actual phenomenon (i.e. like every other culture war issue). The impression that I got from just observing the culture war was that the primary use of these tools was "prompt engineering," i.e. experimenting with and coming up with the right sets of prompts and settings and seeds in order to get an image one wants. This is, of course, how many/most of the most famous examples are generated, because that's how you demonstrate the actual ability of the AI tool.

So I installed Stable Diffusion on my PC and started generating some paintings of big booba Victorian women. I ran into predictable issues with weird composition, deformities, and inaccuracies, but I figured that I could fix these by getting better at "prompt engineering." So I looked at some resources online to see how people actually got better at this. On top of that, I didn't want to just stick to making generic pictures of beautiful Victorian women, or of any sort of beautiful women; I wanted to try making fanart of specific waifu characters doing specific things (as surprising as it may be, this is not a euphemism - more because of a lack of ambition than lack of desire) in specific settings, shot from specific angles in specific styles.

And from digging into the resources, I discovered a couple of important methods to accomplish something like this. First was training the model further on specific characters or things, which I decided not to touch for the moment. Second was in-painting, which is just the very basic concept of doing IMG2IMG on a specific subset of pixels in the image. (There's also out-painting, which is just canvas expansion + noise + in-painting.) "Prompt engineering" was involved to some extent, but the info I read on this was very basic and sparse; at this point, whatever techniques are out there seem pretty minor, not much more sophisticated than the famous "append 'trending on Artstation' to the prompt" tip.
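At its core, in-painting - IMG2IMG restricted to a chosen subset of pixels - amounts to a masked composite. Here's a minimal numpy sketch of just that compositing step; the `generated` array stands in for the model's output, and the function name is my own invention, not part of any Stable Diffusion API:

```python
import numpy as np

def inpaint_composite(original, generated, mask):
    """In-painting at its core: keep `original` pixels where mask == 0,
    and take the freshly `generated` pixels only where mask == 1."""
    mask3 = mask[..., None]  # broadcast the 2D mask across RGB channels
    return np.where(mask3 == 1, generated, original)

# Toy 2x2 RGB images: a black original and an all-white "generation".
original = np.zeros((2, 2, 3), dtype=np.uint8)
generated = np.full((2, 2, 3), 255, dtype=np.uint8)
mask = np.array([[1, 0], [0, 0]])  # regenerate only the top-left pixel

result = inpaint_composite(original, generated, mask)
# Only the masked pixel takes the generated value; the rest is untouched.
```

Out-painting, as noted, is the same operation after first enlarging the canvas and filling the new border with noise for the model to resolve.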

So I went ahead, using initial prompts to generate a crude image, then using IMG2IMG with in-painting to get to the final, specific fanart I wanted to make. And the more I worked on this, the more I realized that this is where the bulk of the actual "work" takes place when it comes to making AI images. If you want to frame a shot a certain way and feature specific characters doing specific things in specific places, you need to follow an iterative process of SD generation, Photoshop edit, in-painting SD generation, Photoshop edit, and so on until the final desired image is produced.
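That iterative process can be sketched as a loop. Everything below is a hypothetical stand-in for a manual or SD step - none of these helpers are real APIs, and the stubs just count refinement rounds so the skeleton actually runs:

```python
# Hypothetical stand-ins for the real workflow steps; each stub just
# tracks how "finished" the image is so the loop skeleton is runnable.
def txt2img(prompt):
    return {"prompt": prompt, "refinements": 0}  # crude initial generation

def edit_and_inpaint(image):
    # Stands in for: Photoshop edit, then in-paint the weak region via SD.
    return {**image, "refinements": image["refinements"] + 1}

def good_enough(image, threshold=3):
    return image["refinements"] >= threshold  # the "looks right" judgment call

def make_fanart(prompt, max_rounds=10):
    image = txt2img(prompt)
    rounds = 0
    while not good_enough(image) and rounds < max_rounds:
        image = edit_and_inpaint(image)
        rounds += 1
    return image, rounds

image, rounds = make_fanart("victorian lady, oil painting")
# With these stubs, the loop settles after 3 refinement rounds.
```

The point of the sketch is the shape of the loop, not the stubs: generate once, then alternate manual edits with localized regeneration until the judgment call says stop.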

I'm largely agnostic and ambivalent on the question of whether AI generated images are Art, or if one is being creative by creating AI generated images. I don't think it really matters; what matters to me is if I can create images that I want to create. But in the culture war, I think the point of comparison has to be between someone drawing from scratch (even if using digital tools like tablets and Photoshop) and someone using AI to iteratively select parts of an image to edit in order to get to what they want. Not someone using AI to punch in the right settings (which can also be argued to be an Art).

The closest analogue I could think of was making a collage by cutting out magazines or picture books and gluing them together in some way that meaningfully reflects the creator's vision. Except instead of rearranging pre-existing works of art, I'm rearranging images generated based on the training done by StabilityAI (or perhaps, the opposite; I'm generating images and then rearranging them). Is collage-making Art? Again, I don't know and I don't care, but the question about AI "art" is a very similar question.

My own personal drawing/illustration skills are quite low; I imagine a typical grade schooler can draw about as well as I can. At many steps along the process of the above iteration, I found myself thinking, "If only I had some meaningful illustration skills; fixing this would be so much easier" as I ran into various issues trying to make a part of an image look just right. I realized that if I actually were a trained illustrator, my ability to exploit this AI tool to generate high quality images would be improved several times over.

And this raises more blurry lines about AI-generated images being Art. At my own skill level, running my drawing through IMG2IMG to get something good is essentially like asking the AI to use my drawing as a loose guide. To say that the image is Artwork that 07mk created would be begging the question, and I would hesitate to take credit as the author of the image. But at the skill level of a professional illustrator, his AI-generated image might look virtually identical to something he created without AI, except it has a few extra details that the artist himself needed the AI to fill in. If I'm willing to say that his non-AI generated images are art, I would find it hard to justify calling the AI-generated one not art.

Based on my experience the past few weeks, my prediction would be that there will be broadly 3 groups in the future in this realm: the pure no-AI Artists, the cyborgs who are skilled Artists using AI to aid them along the process, and people like me, the AI-software operators who aren't skilled artists in any non-AI sense. Furthermore, I think that 2nd group is likely to be the most successful. I think the 1st group will fall into its own niche of pure non-AI art, and it will probably remain the most prestigious and also remain quite populous, but still lose a lot of people to the 2nd group as the leverage afforded to an actually skilled Artist by these tools is significant.

Random thoughts:

  • I didn't really touch on customizing the models to be able to consistently represent specific characters, things, styles, etc., which is a whole other thing unto itself, with its own vibrant community, of which I know very little first-hand. But this raises another aspect of AI-generated images being Art or not - is the technique of finding the right balance when merging different models, or of picking the right training images and training settings to create a model capable of generating the types of pictures you want, itself Art? I would actually lean towards Yes on this, but that may just be because there's still a bit of a mystical haze around it for me from lack of experience. Either way, the question of AI-generated images being Art or not should be that question, not whether picking the right prompts and settings and seed is.

  • I've read artists mention training models on their characters in order to aid them in generating images more quickly for comic books they're working on. Given that speed matters for things like this, this is one "cyborg" method a skilled Artist could use to increase the quantity or quality of their output (either by reducing the time required for each image or increasing the time the Artist can use to finalize the image compared to doing it from scratch).

  • For generating waifus, NovelAI really is far and away the best model, IMHO. I played around a lot with Waifu Diffusion (both 1.2 & 1.3), but getting good looking art out of it - anime or not - was a struggle and inconsistent, while NovelAI did it effortlessly. However, NovelAI is overfitted, making most of their girls have a same-y look. There's also the issue that NovelAI doesn't offer in-painting in their official website, and the only way to use it for in-painting involves pirating their leaked model which I'd prefer not to rely on.

  • I first learned that I could install Stable Diffusion on my PC by stumbling onto https://rentry.org/voldy, whose guide is quite good. I learned later on that the site is maintained by someone from 4chan, and further that 4chan seems to be where a lot of the innovation and development by hobbyists is taking place. As someone who hasn't used 4chan much in well over a decade, this was a blast from the past. In retrospect, this is obvious, given the combination of nihilism and degeneracy you see on 4chan (I say this only out of love; I maintain to this day that there's no online community I've found more loving and welcoming than 4chan).

  • For random "prompt engineering" tips that I figured out over time - use "iris, contacts" to get nicer eyes. "Shampoo, conditioner" seems to make nice hair with a healthy sheen.

But of course, it takes someone deep down the rabbit hole of intellectualizing how it's different when they do it to completely miss this point.

Perhaps I'm just being arrogant, but there's a real sense of "too clever by half" in this sort of intellectualizing. Because if you intellectualize it enough, you recognize that all the past racism/sexism/etc. that past societies bought into as the obviously Correct and Morally Right ways to run society were also justified on the basis of intellectualizing, often to the effect of "it's different when we do it." So someone intellectualizing this should recognize that their own intellectualization of the blatant racism/sexism/etc. that they themselves support is them falling right into the exact same pattern as before, rather than escaping from it.

The final season was so bad that, like the Three Eyed Raven traveling back to make things seem retarded, it actually retroactively killed the rest of the series: people talked about GoT constantly up until the finale, and after it aired, the show disappeared from popular discourse. Some of the pullback from obligatory breasts and "here's a scene of sexual perversion explaining what's wrong with [character]" likely stems from a desire to avoid being seen as derivative of GoT, or from a revulsion at GoT's aesthetic after the fiasco that was the finale.

Hm, how does this square with works like The Witcher (2019), Rings of Power (2022), or Willow (2022) seemingly trying to ape GoT's aesthetic and stylings in an apparent effort to replicate its success? (I'm speculating, having only watched the 1st 2 seasons of The Witcher out of these - and I don't recommend even S1, due to S2 retroactively making it a waste of time.) The Witcher was in production before GoT's self-immolation (though GoT was pretty clearly in the process of pouring gasoline all over itself and looking for matches for multiple years already), but the other two were produced after GoT was well established as just a pile of ashes. Also, the sexual content in GoT is more associated with when it used to be good, so it doesn't seem likely to me that the sexual content was specifically the part of GoT that showrunners would avoid while trying to ape the rest.

People just hated Darwin since he was unabashedly left-wing.

Hard disagree. Darwin had a particular style of bad faith in the way he argued his left-wing positions that made left-wing arguments appear dishonest and manipulative, and that's why I personally was glad he didn't come to this site and stopped interacting with GuessWho once GuessWho revealed that he was Darwin2500 from Reddit.

I wonder if 8 hours of work a day for the 5 workdays managed to become a popular standard due to it cleanly cutting in half the 16 hours a day that most adults are expected to be awake. It's just easy to wrap your head around the idea of cutting up the day into thirds of 8 hours each. I don't know why 5 workdays became standard instead of 6 or 7. Perhaps 7 was out due to the influence of Christianity in most Western nations meaning there had to be 1 day of rest, and perhaps 1 more day on top of that just made sense for giving people more flexibility.

Unjustly? Outraged?

All else being equal, a teacher who is seen by students as an individual human being, rather than as a bureaucrat, will likely be more effective on many dimensions.

This is a popular narrative that people in education, especially teachers, like to push (at least in my experience as a student), and as a result, plenty of former students (i.e. almost everyone in the West) also seem to believe it, but I'm skeptical. Have we ever done any studies measuring things like "how much does a teacher bringing their hobbies into the classroom affect how much students see them as an individual versus a bureaucrat?" or "how does students' perception of the teacher as an individual versus a bureaucrat affect the teacher's effectiveness in [important dimensions], whether positively or negatively, and by how much?" or "if a teacher bringing their hobbies into the classroom does increase how much students see them as an individual, does that particular method of increasing it cause an increase in the teacher's effectiveness in [important dimensions]?"

Given how convenient this narrative is for the teachers who tend to push it - how nice it is that bringing things I like into my workplace also makes me better at my work! - I think there should be a pretty high bar of evidence for this, to rise above the default presumption that it's a narrative that's just too convenient not to believe.

It's one thing to say that, for example, watching MCU movies because they're "in" at the moment doesn't mean you endorse the idea of capitalism, it's quite another to say that your very deliberate modding choices don't at the very least say something about where your lines are.

Sure, those are two different things, but the important thing is that they're both true. Deliberate modding choices don't tell us anything about where your lines are, except strictly within the realm of deliberate modding choices. To extend any implications outward to something else, like one's political opinions or personal ethics or whatever, is something that needs actual external empirical support. One doesn't get to project one's own worldview onto others and then demand that they be held to that standard.

A part of this that hadn't occurred to me until I saw it pointed out is that there seems to be a sort of donation duel between this lady's case and that of Karmelo Anthony, a black teen charged with murdering a white teen at a track meet by stabbing him in the heart during a dispute over seating. I think there was a top-level comment here about this incident before, but there was a substantial amount of support on social media for Anthony on racial grounds, including fundraising for his defense. I get the feeling that a lot of the motivation to donate to this lady comes from people who feel that the support Anthony has been getting on racial grounds is unjust, and supporting her is a way of "balancing the scales," as it were. This isn't the instantiation of "if you tell everyone to focus on everyone's race all the time in every interaction, eventually white people will try to play the same game everyone else is encouraged to" that I foresaw, but it sure is a hilarious one.

Now, one conspiracy theory that I hope is hilariously true is that the guy who recorded this lady was in cahoots with the lady herself and staged the whole thing in order to cash in on the simmering outrage over the Anthony case. But I doubt that anyone involved has the foresight to play that level of 4D chess.

good game writing, like Disco Elysium

This... this is perhaps the single most offensive opinion I've ever read on this forum.

What does it mean to be "transphobic"? Could one not be "transphobic" and still refuse to acknowledge that "trans women are women"? Because I would like to say that I'm not "transphobic," on the basis that I don't think trans people should be denied rights that we accord to others, or forcibly prevented from dressing like women, or even prevented (if over 18, perhaps with some reasonable safeguards) from surgically altering themselves to match their desired gender identity.

To state a truism, words gain meaning through usage, rather than through some sort of application of logic on first principles. "Transphobia" might have components implying that it should mean something like "irrational or severe fear/hatred of trans people," but that's not what it actually means. In practice, the people who use the term "transphobia" - and hence the people who most get to define what it means - use it to describe people who refuse to acknowledge that "trans women are women," or, more generally, who disagree with self-proclaimed trans rights activists on anything trans-related. Obviously that's an imprecise definition, but words tend to have imprecise definitions, and I think, based on observations of self-proclaimed trans rights activists, refusing to acknowledge that "trans women are women" is solidly in the "transphobia" camp.

If, say, someone was going around and gathering a following by literally advocating for the murder of Jews, I think a lot of us would agree that public shaming (at the least) would be appropriate. That means that one must always have some object-level discussion about what people are being cancelled for before one can reasonably argue that any given cancellation is unacceptable. It's hardly a groundbreaking observation, but it's true nonetheless that there must be a line somewhere that would make "cancel culture" type tactics acceptable; we're all just debating where that line is.

This looks like the fallacy of gray to me. Yes, (just about) everyone carves out an exception to free speech when advocating for literal murder is involved, but advocating for literal murder is one of those things that's close to black and white, with mostly well-understood and mostly agreed-upon boundaries. And for the kinds of things that fall under the "transphobia" umbrella, it's quite clear which side of those boundaries they lie on. This, I believe, is why so many self-proclaimed TRAs claim they're fighting against "trans genocide" - a way to evoke the affect of crossing that boundary, even as each specific example of such "genocide" clearly falls on the other side when examined closely. Self-proclaimed TRAs aren't unique or even unusual in this, though.

I don't see it. I'm not sure how the facts stated in the OP could have been expressed in a more dry and less outraged manner without outright sounding like (the old-school scifi stereotype of) an AI.

Some more heating up in the AI image generation culture wars, with stock image company Getty Images suing Stability AI over alleged copyright violations. Here's Getty Images' full press release:

This week Getty Images commenced legal proceedings in the High Court of Justice in London against Stability AI claiming Stability AI infringed intellectual property rights including copyright in content owned or represented by Getty Images. It is Getty Images' position that Stability AI unlawfully copied and processed millions of images protected by copyright and the associated metadata owned or represented by Getty Images absent a license to benefit Stability AI's commercial interests and to the detriment of the content creators.

Getty Images believes artificial intelligence has the potential to stimulate creative endeavors. Accordingly, Getty Images provided licenses to leading technology innovators for purposes related to training artificial intelligence systems in a manner that respects personal and intellectual property rights. Stability AI did not seek any such license from Getty Images and instead, we believe, chose to ignore viable licensing options and long-standing legal protections in pursuit of their stand-alone commercial interests.

This follows a separate class action lawsuit filed in California by 3 artists against multiple image generation AI companies including Stability AI, Midjourney, and DeviantArt (which is an art sharing site, but which seems to be working on building its own image creation model). According to Artnews, "The plaintiffs claim that these companies have infringed on 17 U.S. Code § 106, exclusive rights in copyrighted works, the Digital Millennium Copyright Act, and are in violation of the Unfair Competition law." It seems to me that these 2 lawsuits are complaining about basically the same thing.

IANAL, and I have little idea of how the courts are likely to rule on this, especially English courts versus American ones. I know there's precedent for data scraping being legal, but that precedent is highly context-dependent; e.g. the Google Books case was contingent on the product not being a meaningful competitor to the books being scanned, which is a harder argument to make about an AI image generator with respect to a stock image service. In my subjective opinion, anything published on the public internet is fair game for AI training, since others learning from viewing your work is one of the things you necessarily accept when you publish your work for public view on the internet. This includes watermarked sample images of proprietary images that one could buy. However, there's a strong argument to be made for the other side: that there's something qualitatively different about a human using an AI to offload the process of learning from viewing images, compared to a human directly learning from viewing images, such that the social contract of publishing for public consumption as it exists doesn't account for it and must be amended to include an exception for AI training.

Over the past half year or so, I'm guessing AI image generation has been second only to ChatGPT in mainstream attention directed towards AI-related stuff - maybe 3rd after self-driving cars - and so it's unsurprising to me that a culture war has formed around it. But having paid attention to some AI image generation-related subreddits, I've noticed that the lines still don't really fit the existing culture war lines. There are signs of the left coalescing against AI image generation, with much of the pushback coming from illustrators who are on the left, such as the comic artist Sarah C Andersen, who's one of the 3 artists in that class action lawsuit, and also a sort of leftist desire to protect the jobs of lowly paid illustrators by preventing competition. But that's muddled by the fact that, on Reddit, most people are on the left to begin with, the folks who are fine with AI image generation tools (by which I mean the current models trained on publicly available but sometimes copyrighted images) are also heavily on the left, and there are also leftist arguments in favor of the tech for opening up high quality image generation to people with disabilities like aphantasia. Gun to my head, I would guess that this trend will continue until, within 2 years, it's basically considered Nazism to use "unethically trained AI" to create images, but my confidence in that guess is close to nil.

From a practical perspective, there's no legislation that can stop people from continuing to use the models that are already built, but depending on the results of these lawsuits, we could see further development in this field slow down quite a bit. I imagine that it can and will be worked around, and restrictions on training data will only delay the technology by a few years, which would mean that what I see as the true complaint from stock image websites and illustrators - unfair competition - wouldn't be addressed, so I would expect this culture war to remain fairly heated for the foreseeable future.

I wonder if this explains the bizarre reaction by some feminists to women being called "females," despite their not having a problem with women being described as "female." I've seen a number of weird, twisting explanations for why the former is "dehumanizing" or whatever, but all of them appeared to be pure motivated reasoning, especially given that no man I've ever heard of has had any problem with being called "a male." It could very well indeed be pure motivated reasoning, meant to put a veneer of justification over what's, at heart, a pure visceral response.

What I reject is the idea that it doesn't say anything about you.

In the literal sense, nobody takes the other side of this, though. Trivially, if I make deliberate modding choices, then that tells the world that I made those deliberate modding choices. I think so few non-schizophrenic people would disagree with this as to be irrelevant. So claiming that it says something about me is meaningless: of course it does, because every choice I make trivially tells the world that I made that choice.

The point of contention is on the specific claims about what else these choices imply about me or any other generic choice-maker. E.g. if someone modded Stardew Valley to transform some brown pixels to beige ones, it's entirely possible that such a decision was motivated by the modder's deeply held philosophical/political/personal/etc. views which are bigoted, hateful, or whatever, but that can only be supported by additional external information. And merely knowing that this person made such a mod doesn't actually add any information or give us any data from which to construct the truth about that modder's motivations or beliefs or where their lines are. Again, with the exception of the trivial truth that it tells us a lot about the modder's desire to transform certain pixels.

I don't have an opinion on tenure, and I lean on the side of thinking that legislation ought not to interfere with the operations of even public universities to the extent of banning it. Likewise, I'm not sure that legislation ought to specifically compel firings of professors spreading odious views, including "belief that any race, sex, or ethnicity or social, political, or religious belief is inherently superior to any other race, sex, ethnicity, or belief." As described by Aaronson, the professor would have to at least attempt to "compel" this belief, but that could mean something as innocuous as stating it in class and winking, for all I know. I don't know if setting the precedent for such legislative micromanaging causes more harm than good.

But for SB17, as described by Aaronson:

The Texas Senate is considering two other bills this week: SB 17, which would ban all DEI (Diversity, Equity, and Inclusion) programs, offices, and practices at public universities

seems like a very straightforward implementation of the First Amendment's religion clauses. DEI is clearly a religion, a specifically and openly faith-based worldview with certain morals that follow downstream of that faith, and much like how public universities ought not push Christianity or Islam on their faculty or students, they ought not push DEI on them either. The devil's in the details, I suppose, since public universities certainly can make accommodations for religions, including hosting services, and maybe this law might go too far. I would think that such a specific law wouldn't even be required, though, since the Constitution already covers this.

I've only seen Zendaya in the Spiderman movies and Dune, so I can't speak to her acting chops, but I can't disagree more on the idea that people are pretending that she's attractive. IMHO she's easily the most attractive prominent Hollywood actress right now. Maybe Rebecca Ferguson and Gal Gadot might come close? In any case, purely based on looks and ignoring any acting skills, her apparent popularity seems entirely justified to me.

I can't even think of there being any particular hubbub about her race in casting decisions. Even in the superhero movies she was in - a genre notorious in recent years for filmmakers accusing fans of bigotry - her casting as the character equivalent of the traditionally red-headed white woman Mary Jane was basically a non-issue, similar to Sam Jackson being Nick Fury.

Can someone remind me what the "2S"

2S is for Two-Spirit. I don't know exactly what it is, but I think I heard it's some sort of double-gender thing that some indigenous people of somewhere, I think, have or had.

There aren't very many Democrats or progressives on this forum and I'd hazard to guess most of them view trying to push back to be a waste of time

This is likely true. But as a progressive Democrat myself, I wonder how many people here are like me in that I don't particularly want to push back but rather to read and learn. It's pretty easy to see countless arguments that Donald Trump is a particularly norm-breaking POTUS practically everywhere I look, but it's harder to see arguments for the "sore loser" theory, especially any good or strong versions of those arguments. A large part of my motivation in reading posts in this forum is to see such things in the hopes that they actually challenge my biased perspective on various CW issues, including Donald Trump, so that I can form a more accurate view of them.

For this particular issue, what I'd most prefer to see is a progressive Democrat make a case for the "sore loser" theory and a MAGA Republican make a case for the "Trump was a particularly norm-breaking POTUS in a way that was genuinely dangerous to democracy" theory, not out of charity but out of genuine, heartfelt belief. Because those are the arguments that I would find the most credible and most valuable for triangulating the actual truth of the matter. Unfortunately, such people don't seem to be particularly available, and so I want to see the strongest version of the theory I personally find distasteful or wrong on a visceral level, which is the "sore loser" theory.

Has anyone noticed how much vitriol there is towards AI-generated art? Over the past year it's slowly grown into something quite ferocious, though not quite ubiquitous.

I honestly think it's far closer to the opposite of ubiquitous, but it certainly is quite ferocious. But like so much ferocity that you see online, I think it's a very vocal but very small minority. I spend more time than I should on subreddits specifically about the culture war around AI art, and (AFAIK) the primary anti-AI art echo chamber subreddit, /r/ArtistHate, has fewer than 7K members, compared to the primary pro-AI art echo chamber subreddit, /r/DefendingAIArt, which has 23K. The primary AI art culture war discussion subreddit, /r/aiwars, has 40K members, and the upvote and commenting patterns indicate that a large majority of the people there like AI art, or at least dislike the hatred against it.

These numbers don't prove anything, especially since hating on AI art tends to be accepted in a lot of generic art and fandom communities, which leads to people who dislike AI art not particularly finding value in a community specifically made for disliking it, but I think they at least point in one direction.

IRL, I've also encountered general ambivalence towards AI art. Most people are at least aware of it, most find it a cool curiosity, and none that I've encountered has expressed anything approaching hatred for it. My sister, who works in design, had no qualms about gifting me a little trinket with a design made using AI. She seems to take access to AI art via Photoshop for granted - though interestingly, I learned this as part of a story she told me about interviewing a potential hire whose portfolio looked suspiciously like AI art, which she confirmed by using Photoshop to generate similar images and finding that the style matched. She disapproved of it not out of hatred of AI art, but rather because the designers they hire need to have actual manual skills, and passing off AI art without specifically disclosing it like that is dishonest.

I think the vocal minority that does exist makes a lot of sense. First of all, potential jobs and real status - from having the previously rather exclusive ability to create high fidelity illustrations - are on the line. People tend to get both highly emotional and highly irrational when either is involved. And second, art specifically has a certain level of mysticism around it, to the point that even atheist materialists will talk about manually-made human art (or novels or films or songs) having a "soul" or a "piece of the artist" within it, and the existence of computers using matrix math to create such things challenges that notion. It wasn't that long ago that sci-fi regularly depicted AI and robots as having difficulty creating and/or comprehending such things.

And, of course, there's the issue of how the tools behind AI art (and modern generative AI in general) were created, which was by analyzing billions of pictures downloaded from the internet for free. Opinions differ on whether or not this counts as copyright infringement or "stealing," but many artists certainly seem to believe that it is; that is, they believe that other people have an obligation to ask for their permission before using their artworks to train their AI models.

My guess is that such people tend to be overrepresented in the population of illustrators, and social media tends to involve a lot of people following popular illustrators for their illustrations, and so their views on the issue propagate to their fans. And no technology amplifies hatred quite as well as social media, resulting in an outsized appearance of hatred relative to the actual hatred that's there. Again, I think most people are just plain ambivalent.

That, to me, is actually interesting in itself. So far, the culture war around AI art doesn't seem to have been subsumed by the larger culture wars that have been going on constantly for at least the past decade. Plenty of left/progressive/liberal people hate AI art because they're artists, but plenty love it because they're into tech or accessibility. I don't know as much about the right/conservative side, but I've seen some religious conservatives call it satanic, and others love it because they're into tech and dunking on liberal artists.

I thought this was just another generic bad faith poster, but now that you pointed out the actual meaning of the name, I'm realizing this very well could be Darwin just having some fun with his username. It's been a long time since I've read his posts with any sort of regularity, but this definitely fits the pattern of obviously bad faith strawmanning that I remember.

I've seen a number of posters suggest that he was done in by bad/disingenuous feminist dating advice, implying that women will tell men "Yes, we like to fuck just as much as you do!" and that means you can approach a woman for sex the same way you wish a woman would approach you for sex. But I don't recall ever seeing dating advice, even from feminists, suggesting that any woman wants a proposition like "How about being my no-strings-attached fuck buddy?"

I don't understand the reasoning in these 2 sentences. The latter - "How about being my no-strings-attached fuck buddy?" - is clearly just an instantiation of the former - "Yes, we like to fuck just as much as you do!" and that means you can approach a woman for sex the same way you wish a woman would approach you for sex. It'd be like telling someone that they can order anything from the menu and when they say they want the pizza that's on page 2, responding with "I don't recall ever telling you that you could order pizza."

Note that none of this is me claiming that these gaps can't be real. I'm just saying that if you were a black person seeing how poorly your fellow black people are doing in the world and told "Sorry, it's just your bad luck to be born the race whose dump stat is Intelligence," you would probably have a problem accepting this with equanimity.

This isn't the message, though. Being born to a particular race certainly can be bad luck, depending on the race and on the discrimination that goes on in that society. But the average IQ - and more broadly the average of any trait - of your race has no real bearing on your lot in life; it's your own personal intelligence that does. And that personal intelligence isn't influenced by the average intelligence of your race - it's the other way around: the average intelligence of your race is determined by the personal intelligence of you and everyone else in your race, because that's literally how one calculates an average.