07mk

0 followers   follows 0 users   joined 2022 September 06 15:35:57 UTC
User ID: 868   Verified Email

No bio...

I've been pretty obsessively playing around with AI image generation the last 3 or so weeks, and after learning what I have in that time, it's struck me how the culture war arguments seem to miss the contours of the actual phenomenon (i.e. like every other culture war issue). The impression that I got from just observing the culture war was that the primary use of these tools was "prompt engineering," i.e. experimenting with and coming up with the right sets of prompts and settings and seeds in order to get an image one wants. This is, of course, how many/most of the most famous examples are generated, because that's how you demonstrate the actual ability of the AI tool.

So I installed Stable Diffusion on my PC and started generating some paintings of big booba Victorian women. I ran into predictable issues with weird composition, deformities, and inaccuracies, but I figured that I could fix these by getting better at "prompt engineering." So I looked at some resources online to see how people actually got better at this. On top of that, I didn't want to just stick to making generic pictures of beautiful Victorian women, or of any sort of beautiful women; I wanted to try making fanart of specific waifu characters doing specific things (as surprising as it may be, this is not a euphemism - more because of a lack of ambition than a lack of desire) in specific settings, shot from specific angles and in specific styles.

And from digging into the resources, I discovered a couple of important methods to accomplish something like this. First was training the model further for specific characters or things, which I decided not to touch for the moment. Second was in-painting, which is just the very basic concept of doing IMG2IMG on a specific subset of pixels of the image. (There's also out-painting, which is just canvas expansion + noise + in-painting.) "Prompt engineering" was involved to some extent, but the info I read on this was very basic and sparse; at this point, whatever techniques are out there seem pretty minor, not much more sophisticated than the famous "append 'trending on Artstation' to the prompt" tip.
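
For the curious, here's roughly what in-painting looks like in code - a minimal sketch assuming the Hugging Face diffusers library rather than the webui workflow I actually used; the checkpoint name, file names, prompt, and settings are just placeholders:

```python
# Minimal in-painting sketch with Hugging Face diffusers (placeholder
# checkpoint, file names, and prompt). Conceptually this is IMG2IMG
# restricted to the white pixels of a mask.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

init_image = Image.open("crude_draft.png").convert("RGB")  # the rough first generation
mask_image = Image.open("mask.png").convert("RGB")         # white = region to regenerate

# Only the masked region is re-noised and re-denoised; everything else
# is carried over unchanged from the original image.
result = pipe(
    prompt="oil painting of a victorian woman, detailed hands",
    image=init_image,
    mask_image=mask_image,
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
result.save("fixed.png")
```

Out-painting is conceptually the same call after pasting the original image onto a larger canvas and masking the newly added border.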

So I went ahead, using initial prompts to generate a crude image and then using IMG2IMG with in-painting to work toward the final, specific fanart I wanted to make. And the more I worked on this, the more I realized that this is where the bulk of the actual "work" takes place when it comes to making AI images. If you want to frame a shot a certain way and feature specific characters doing specific things in specific places, you need to follow an iterative process of SD generation, Photoshop edit, in-painting SD generation, Photoshop edit, and so on until the final desired image is produced.

I'm largely agnostic and ambivalent on the question of whether AI generated images are Art, or if one is being creative by creating AI generated images. I don't think it really matters; what matters to me is if I can create images that I want to create. But in the culture war, I think the point of comparison has to be between someone drawing from scratch (even if using digital tools like tablets and Photoshop) and someone using AI to iteratively select parts of an image to edit in order to get to what they want. Not someone using AI to punch in the right settings (which can also be argued to be an Art).

The closest analogue I could think of was making a collage by cutting out magazines or picture books and gluing them together in some way that meaningfully reflects the creator's vision. Except instead of rearranging pre-existing works of art, I'm rearranging images generated based on the training done by StabilityAI (or perhaps, the opposite; I'm generating images and then rearranging them). Is collage-making Art? Again, I don't know and I don't care, but the question about AI "art" is a very similar question.

My own personal drawing/illustration skills are quite low; I imagine a typical grade schooler can draw about as well as I can. At many steps along the process of the above iteration, I found myself thinking, "If only I had some meaningful illustration skills; fixing this would be so much easier" as I ran into various issues trying to make a part of an image look just right. I realized that if I actually were a trained illustrator, my ability to exploit this AI tool to generate high quality images would be improved several times over.

And this raises more blurry lines about AI-generated images being Art. At my own skill level, running my drawing through IMG2IMG to get something good is essentially like asking the AI to use my drawing as a loose guide. To say that the image is Artwork that 07mk created would be begging the question, and I would hesitate to take credit as the author of the image. But at the skill level of a professional illustrator, his AI-generated image might look virtually identical to something he created without AI, except it has a few extra details that the artist himself needed the AI to fill in. If I'm willing to say that his non-AI generated images are art, I would find it hard to justify calling the AI-generated one not art.

Based on my experience the past few weeks, my prediction would be that there will be broadly 3 groups in the future in this realm: the pure no-AI Artists, the cyborgs who are skilled Artists using AI to aid them along the process, and people like me, the AI-software operators who aren't skilled artists in any non-AI sense. Furthermore, I think that 2nd group is likely to be the most successful. I think the 1st group will fall into its own niche of pure non-AI art, and it will probably remain the most prestigious and also remain quite populous, but still lose a lot of people to the 2nd group as the leverage afforded to an actually skilled Artist by these tools is significant.

Random thoughts:

  • I didn't really touch on customizing the models to be able to consistently represent specific characters, things, styles, etc., which is a whole other thing unto itself. There seems to be a vibrant community built around it, and I know very little of it firsthand. But this raises another aspect of whether AI-generated images are Art: is the technique of finding the right balance when merging different models, or of picking the right training images and training settings to create a model capable of generating the types of pictures you want, itself an Art? (A rough sketch of the model-merging idea appears after this list.) I would actually lean towards Yes on this, but that may just be because there's still a bit of a mystical haze around it for me from lack of experience. Either way, the question of AI-generated images being Art or not should be that question, not whether or not picking the right prompts and settings and seed is.

  • I've read artists mention training models on their characters in order to aid them in generating images more quickly for comic books they're working on. Given that speed matters for things like this, this is one "cyborg" method a skilled Artist could use to increase the quantity or quality of their output (either by reducing the time required for each image or increasing the time the Artist can use to finalize the image compared to doing it from scratch).

  • For generating waifus, NovelAI really is far and away the best model, IMHO. I played around a lot with Waifu Diffusion (both 1.2 & 1.3), but getting good looking art out of it - anime or not - was a struggle and inconsistent, while NovelAI did it effortlessly. However, NovelAI is overfitted, making most of their girls have a same-y look. There's also the issue that NovelAI doesn't offer in-painting on their official website, and the only way to use it for in-painting involves pirating their leaked model, which I'd prefer not to rely on.

  • I first learned that I could install Stable Diffusion on my PC by stumbling on https://rentry.org/voldy, whose guide is quite good. I learned later on that the site is maintained by someone from 4chan, and further that 4chan seems to be where a lot of the innovation and development by hobbyists is taking place. As someone who hasn't used 4chan much in well over a decade, this was a blast from the past. In retrospect it's obvious, given the combination of nihilism and degeneracy you see on 4chan (I say this only out of love; I maintain to this day that there's no online community I've found more loving and welcoming than 4chan).

  • As for random "prompt engineering" tips that I figured out over time: use "iris, contacts" to get nicer eyes, and "shampoo, conditioner" seems to make nice hair with a healthy sheen.
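
Since model merging came up in the first bullet above, here's a rough sketch of the simplest version of that technique: a plain weighted average of two checkpoints' weights. This assumes the checkpoints store their weights under a "state_dict" key, as the original Stable Diffusion releases do; the file names and alpha value are placeholders, and real merge tools also support fancier modes like "add difference."

```python
# Rough sketch of the simplest checkpoint merge: a weighted average of
# two Stable Diffusion models' weights. File names and alpha are placeholders.
import torch

alpha = 0.5  # 0.0 = pure model A, 1.0 = pure model B

model_a = torch.load("model_a.ckpt", map_location="cpu")["state_dict"]
model_b = torch.load("model_b.ckpt", map_location="cpu")["state_dict"]

merged = {}
for key, tensor_a in model_a.items():
    if key in model_b and torch.is_tensor(tensor_a) and tensor_a.is_floating_point():
        # Interpolate every shared floating-point tensor between the two models.
        merged[key] = (1 - alpha) * tensor_a + alpha * model_b[key]
    else:
        # Keep non-float entries and keys that only exist in model A as-is.
        merged[key] = tensor_a

torch.save({"state_dict": merged}, "merged_model.ckpt")
```

The "right balance" question in that bullet is essentially the question of which pair of models to merge and what alpha to pick.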

The final season was so bad that, like the Three-Eyed Raven traveling back to make things seem retarded, it actually retrospectively killed the rest of the series: people talked about GoT constantly up until the finale, and after it aired the show disappeared from popular discourse. Some of the pullback from obligatory breasts and "here's a scene of sexual perversion explaining what's wrong with [character]" likely stems from a desire to avoid being seen as derivative of GoT, or from revulsion at GoT's aesthetic after the fiasco that was the finale.

Hm, how does this square with works like The Witcher (2019), Rings of Power (2022), or Willow (2022) seemingly trying to ape GoT's aesthetic and stylings in an apparent effort to replicate its success? (I'm speculating, having only watched the first 2 seasons of The Witcher out of these - I don't recommend even S1, due to S2 retroactively making it a waste of time.) The Witcher was in production before GoT's self-immolation (though GoT was pretty clearly in the process of pouring gasoline all over itself and looking for matches for multiple years already), but the other two were being produced after GoT was well established as just a pile of ashes. Also, the sexual content in GoT is more associated with when it used to be good, so it doesn't seem likely to me that the sexual content was specifically the part of GoT that showrunners would avoid while trying to ape other parts of it.

I wonder if 8 hours of work a day for 5 workdays managed to become a popular standard because it cleanly cuts in half the 16 hours a day that most adults are expected to be awake. It's just easy to wrap your head around the idea of cutting the day into thirds of 8 hours each. I don't know why 5 workdays became standard instead of 6 or 7. Perhaps 7 was out due to the influence of Christianity in most Western nations, meaning there had to be 1 day of rest, and perhaps 1 more day on top of that just made sense for giving people more flexibility.

Unjustly? Outraged?

All else being equal, a teacher who is seen by students as an individual human being, rather than as a bureaucrat, will likely be more effective on many dimensions.

This is a popular narrative that people in education, especially teachers, like to push (at least from my experience as a student), and as a result, plenty of former students (i.e. almost everyone in the West) also seem to believe it, but I'm skeptical. Have we ever done any studies measuring stuff like "how much does a teacher bringing their hobbies into the classroom affect how much students see them as an individual versus a bureaucrat?" or "how does students' perception of the teacher as an individual versus a bureaucrat affect the teacher's effectiveness in [important dimensions], whether positively or negatively, and by how much?" or "if a teacher bringing their hobbies into the classroom does increase how much students see them as an individual, does that particular method of increasing it actually cause an increase in the teacher's effectiveness in [important dimensions]?"

Given how convenient this narrative is for the teachers who tend to push it - how nice it is that bringing things I like into my workplace also makes me better at my work! - I think there should be a pretty high bar of evidence for this, to rise above the default presumption that it's a narrative that's just too convenient not to believe.

It's one thing to say that, for example, watching MCU movies because they're "in" at the moment doesn't mean you endorse the idea of capitalism, it's quite another to say that your very deliberate modding choices don't at the very least say something about where your lines are.

Sure, those are two different things, but the important thing is that they're both true. Deliberate modding choices don't tell us anything about where your lines are, except strictly within the realm of deliberate modding choices. To extend any implications outward to something else, like one's political opinions or personal ethics or whatever, is something that needs actual external empirical support. One doesn't get to project one's own worldview onto others and then demand that they be held to that standard.

good game writing, like Disco Elysium

This... this is perhaps the single most offensive opinion I've ever read on this forum.

What does it mean to be "transphobic"? Could one not be "transphobic" and still refuse to acknowledge that "trans women are women"? Because I would like to say that I'm not "transphobic" on the basis that I don't think trans people should be denied rights that we accord to others, or forcibly prevented from dressing like women, or even (if over 18) prevented from surgically altering themselves to match their desired gender identity (perhaps with some reasonable safeguards).

To state a truism, words gain meaning through usage, rather than through some sort of application of logic on first principles. "Transphobia" might have components that imply that it should mean something like "irrational or severe fear/hatred of trans people," but that's not what it actually means. In practice, the people who use the term "transphobia" - and hence the people who most get to define what it means - use it in such a way as to describe people who refuse to acknowledge that "trans women are women" and, more generally, just disagree with self-proclaimed trans rights activists on anything trans-related. Obviously that's an imprecise definition, but words tend to have imprecise definitions, and I think, based on observations of self-proclaimed trans rights activists, refusing to acknowledge that "trans women are women" is solidly in the "transphobia" camp.

If, say, someone was going around and gathering a following by literally advocating for the murder of Jews, I think a lot of us would agree that public shaming (at the least) would be appropriate. That means that one must always have some object-level discussion about what people are being cancelled for before one can reasonably argue that any given cancellation is unacceptable. It's hardly a groundbreaking observation, but it's true nonetheless that there must be a line somewhere that would make "cancel culture" type tactics acceptable; we're all just debating where that line is.

This looks like the fallacy of gray to me. Yes, (just about) everyone carves out an exception to free speech when advocating for literal murder is involved, but advocating for literal murder is one of those things that's close to black and white, with mostly well-understood and mostly agreed-upon boundaries. And for the kinds of things that fall under the "transphobia" umbrella, it's quite clear which side of those boundaries they lie on. This, I believe, is why so many self-proclaimed TRAs claim they're fighting against "trans genocide," as a way to evoke the affect of crossing that boundary, even as each individual specific example of such "genocide" clearly falls on the other side when examined closely. Self-proclaimed TRAs aren't unique or even unusual in this, though.

I don't see it. I'm not sure how the facts stated in the OP could have been expressed in a more dry and less outraged manner without outright sounding like (the old-school scifi stereotype of) an AI.

Some more heating up in the AI image generation culture wars, with stock image company Getty Images suing Stability AI over alleged copyright violations. Here's Getty Images' full press release:

This week Getty Images commenced legal proceedings in the High Court of Justice in London against Stability AI claiming Stability AI infringed intellectual property rights including copyright in content owned or represented by Getty Images. It is Getty Images’ position that Stability AI unlawfully copied and processed millions of images protected by copyright and the associated metadata owned or represented by Getty Images absent a license to benefit Stability AI’s commercial interests and to the detriment of the content creators.

Getty Images believes artificial intelligence has the potential to stimulate creative endeavors. Accordingly, Getty Images provided licenses to leading technology innovators for purposes related to training artificial intelligence systems in a manner that respects personal and intellectual property rights. Stability AI did not seek any such license from Getty Images and instead, we believe, chose to ignore viable licensing options and long‑standing legal protections in pursuit of their stand‑alone commercial interests.

This follows a separate class action lawsuit filed in California by 3 artists against multiple image generation AI companies including Stability AI, Midjourney, and DeviantArt (which is an art sharing site, but which seems to be working on building its own image creation model). According to Artnews, "The plaintiffs claim that these companies have infringed on 17 U.S. Code § 106, exclusive rights in copyrighted works, the Digital Millennium Copyright Act, and are in violation of the Unfair Competition law." It seems to me that these 2 lawsuits are complaining about basically the same thing.

IANAL, and I have little idea of how the courts are likely to rule on this, especially English courts versus American ones. I know there's precedent for data scraping being legal, but those precedents are highly context-dependent; e.g. the Google Books case was contingent on the product not being a meaningful competitor to the books that were being scanned, which is a harder argument to make about an AI image generator with respect to a stock image service. In my subjective opinion, anything published on the public internet is fair game for training by AI, since others learning from viewing your work is one of the things you necessarily accept when you publish your work for public view on the internet. This includes watermarked sample images of proprietary images that one could buy. However, there's a strong argument to be made for the other side: that there's something qualitatively different about a human using an AI to offload the process of learning from viewing images, compared to a human directly learning from viewing images, such that the social contract of publishing for public consumption as it exists doesn't account for it and must be amended to include an exception for AI training.

Over the past half year or so, I'm guessing AI image generation is second only to ChatGPT in the mainstream attention directed towards AI-related stuff - maybe 3rd, after self-driving cars - so it's unsurprising to me that a culture war has formed around it. But having paid attention to some AI image generation-related subreddits, I've noticed that the lines still don't really fit existing culture war lines. There are signs of coalescing against AI image generation on the left, with much of the pushback coming from illustrators who are on the left, such as the comic artist Sarah C. Andersen, who's one of the 3 artists in that class action lawsuit, and also from a sort of leftist desire to protect the jobs of lowly paid illustrators by preventing competition. But that's muddled by the fact that, on Reddit, most people are on the left to begin with; the folks who are fine with AI image generation tools (by which I mean the current models trained on publicly available but sometimes copyrighted images) are also heavily on the left, and there are also leftist arguments in favor of the tech for opening up high quality image generation to people with disabilities like aphantasia. Gun to my head, I would guess that this trend will continue until, within 2 years, it's basically considered Nazism to use "unethically trained AI" to create images, but my confidence level in that guess is close to nil.

From a practical perspective, there's no legislation that can stop people from continuing to use the models that are already built, but depending on the results of these lawsuits, we could see further development in this field slow down quite a bit. I imagine that it can and will be worked around, and that restrictions on training data will only delay the technology by a few years. That would mean that what I see as the true complaint from stock image websites and illustrators - unfair competition - wouldn't be addressed, so I would expect this culture war to remain fairly heated for the foreseeable future.

What I reject is that idea that it doesn't say anything about you.

In the literal sense, nobody takes the other side of this, though. Trivially, if I make deliberate modding choices, then that tells the world that I made those deliberate modding choices. I think so few non-schizophrenic people would disagree with this as to be irrelevant. So claiming that it says something about me is meaningless: of course it does, because every choice I make trivially tells the world that I made that choice.

The point of contention is on the specific claims about what else these choices imply about me or any other generic choice-maker. E.g. if someone modded Stardew Valley to transform some brown pixels to beige ones, it's entirely possible that such a decision was motivated by the modder's deeply held philosophical/political/personal/etc. views which are bigoted, hateful, or whatever, but that can only be supported by additional external information. And merely knowing that this person made such a mod doesn't actually add any information or give us any data from which to construct the truth about that modder's motivations or beliefs or where their lines are. Again, with the exception of the trivial truth that it tells us a lot about the modder's desire to transform certain pixels.

I don't have an opinion on tenure, and I lean on the side of thinking that legislation ought not to interfere with the operations of even public universities to the extent of banning it. Likewise, I'm not sure that legislation ought to specifically compel firings of professors spreading odious views, including "belief that any race, sex, or ethnicity or social, political, or religious belief is inherently superior to any other race, sex, ethnicity, or belief." As described by Aaronson, the professor would have to at least attempt to "compel" this belief, but that could mean something as innocuous as stating it in class and winking, for all I know. I don't know if setting the precedent for such legislative micromanaging causes more harm than good.

But for SB17, as described by Aaronson:

The Texas Senate is considering two other bills this week: SB 17, which would ban all DEI (Diversity, Equity, and Inclusion) programs, offices, and practices at public universities

seems like a very straightforward implementation of the First Amendment's religion clause. DEI is clearly a religion, a specifically and openly faith-based worldview with certain morals that follow downstream of that faith, and much like how public universities ought not to push Christianity or Islam on their faculty or students, they ought not to push DEI on them either. The devil's in the details, I suppose, since public universities certainly can make accommodations for religions, including holding services, and maybe this law goes too far. I would think that such a specific law shouldn't even be required, though, since the Constitution already covers this.

I've only seen Zendaya in the Spiderman movies and Dune, so I can't speak to her acting chops, but I can't disagree more on the idea that people are pretending that she's attractive. IMHO she's easily the most attractive prominent Hollywood actress right now. Maybe Rebecca Ferguson and Gal Gadot might come close? In any case, purely based on looks and ignoring any acting skills, her apparent popularity seems entirely justified to me.

I can't even think of there being any particular hubbub about her race in casting decisions. Even in the superhero movies she was in - a genre notorious for filmmakers accusing fans of bigotry in recent years - her casting as the character equivalent of the traditionally red-headed white woman Mary Jane was basically a non-issue, similar to Sam Jackson being Nick Fury.

Can someone remind me what the “2S”

2S is for Two-Spirit. I don't know exactly what it is, but I think I heard it's some sort of double-gender thing that some indigenous people of somewhere, I think, have or had.

There aren't very many Democrats or progressives on this forum and I'd hazard to guess most of them view trying to push back to be a waste of time

This is likely true. But as a progressive Democrat myself, I wonder how many people here are like me, in that I don't particularly want to push back but rather to read and learn. It's pretty easy to see countless arguments that Donald Trump is a particularly norm-breaking POTUS practically everywhere I look, but it's harder to see arguments for the "sore loser" theory, especially any good or strong versions of those arguments. A large part of my motivation in reading posts on this forum is to see such things, in the hopes that they actually challenge my biased perspective on various CW issues, including Donald Trump, so that I can form a more accurate view of them.

For this particular issue, what I'd most prefer to see is a progressive Democrat make a case for the "sore loser" theory and a MAGA Republican make a case for the "Trump was a particularly norm-breaking POTUS in a way that was genuinely dangerous to democracy" theory, not out of charity but out of genuine, heartfelt belief. Because those are the arguments that I would find the most credible and most valuable for triangulating the actual truth of the matter. Unfortunately, such people don't seem to be particularly available, and so I want to see the strongest version of the theory I personally find distasteful or wrong on a visceral level, which is the "sore loser" theory.

I thought this was just another generic bad faith poster, but now that you pointed out the actual meaning of the name, I'm realizing this very well could be Darwin just having some fun with his username. It's been a long time since I've read his posts with any sort of regularity, but this definitely fits the pattern of obviously bad faith strawmanning that I remember.

I've seen a number of posters suggest that he was done in by bad/disingenuous feminist dating advice, implying that women will tell men "Yes, we like to fuck just as much as you do!" and that means you can approach a woman for sex the same way you wish a woman would approach you for sex. But I don't recall ever seeing dating advice, even from feminists, suggesting that any woman wants a proposition like "How about being my no-strings-attached fuck buddy?"

I don't understand the reasoning in these 2 sentences. The latter - "How about being my no-strings-attached fuck buddy?" - is clearly just an instantiation of the former - "Yes, we like to fuck just as much as you do!" and that means you can approach a woman for sex the same way you wish a woman would approach you for sex. It'd be like telling someone that they can order anything from the menu and then, when they say they want the pizza that's on page 2, responding with "I don't recall ever telling you that you could order pizza."

Note that none of this is me claiming that these gaps can't be real. I'm just saying that if you were a black person seeing how poorly your fellow black people are doing in the world and told "Sorry, it's just your bad luck to be born the race whose dump stat is Intelligence," you would probably have a problem accepting this with equanimity.

This isn't the message, though. Being born to a particular race certainly can be bad luck, depending on the race and the society, based on the discrimination that goes on in that society. But the average IQ - and more broadly the average of any trait - of your race has no real bearing on your lot in life. It's your own personal intelligence that has a bearing on your life. And that personal intelligence isn't influenced by the average intelligence of your race - it's the other way around: the average intelligence of your race is influenced by the personal intelligence of you and everyone else in your race, because that's literally how one would calculate the average.

There's a delusional fantasy among some rightists that if only the (white) public "knew" about HBD, the wool would fall from their eyes and they'd instantly adopt conservative positions on a wide range of policies. In reality, leftist ideas are much more resilient than that. They can justify affirmative action, reparations and so on in countless other ways, and in some cases already have.

What I notice is that this delusional fantasy is shared by many, possibly most, leftists as well, which is what many of them say justifies the immense censorship efforts to prevent HBD from being an acceptable thought. But as you say, leftist ideas are resilient, and this has always struck me as both naive and counterproductive. Naive, because it takes the most simplistic idea of something like "if people realized people of [race] were more genetically predisposed to [bad behavior], then of course that would lead to more bigotry and racial hatred and dehumanizing of people of [race]" without actually doing the sociological research required to justify such a belief. And counterproductive, because it creates the false notion that the correctness of leftist ideas is contingent on some empirical reality about genetics, leaving those ideas open to appearing falsified when facts about genetics come out. And for what gain? None, as far as I can tell, since leftist ideas actually aren't contingent upon HBD being false.

Is it a boycott, or is it just that they're putting out shitty products that people are wising up to and no longer want to pay for? Though wokeness plays a (significant) part in them being awful, many of their recent works would have still been completely awful regardless of the messaging.

We’re not going to show the full statue of David to kindergartners. We’re not going to show him to second graders. Showing the entire statue of David is appropriate at some age. We’re going to figure out when that is.

This really caught me off guard. Really, not to kindergartners? He states it as if it's just obvious that no decent person would show this famous statue in its entirety to kindergartners, just because it has anatomically correct genitalia. Even as an American, I don't know if it's American prudishness, since American prudishness has changed a lot in the 30 years I've lived there. I have to wonder if John Ashcroft's (not Donald Rumsfeld's, per the correction below) practice during Bush 2 of covering up the breasts of statues he was speaking in front of had downstream effects I didn't anticipate.

But I think there's at least something to it; when I was growing up in Korea, it was pretty normal for cartoons and comic books for kids to have full frontal nudity, often in comedic contexts (and always in non-sexual contexts, since sex didn't even mean anything to the target audience of <10 year olds). This was the case for Dragon Ball, a Japanese comic book series that is internationally popular, including in both Korea and the US, and in which I discovered the censorship of the protagonist Goku's dick and balls when I came to the US and read English localizations.

Also, I was randomly reminded of something from the DVD commentary of the 1999 film Election, a very good comedy about a high school president election, where one of the last scenes of the film starts with a close-up of the genitals of some statue at the Metropolitan Museum. Apparently for TV/airplane/otherwise age-restricted versions of this R-rated film, they had to cut that opening shot, despite the fact that the statue was right there at the front of the museum for anyone walking by to see.

My pet theory is that ChatGPT and DALLE were a massive bait to that crowd, luring them out as free labour to strengthen their AI control skills. Why else would they make it free?

I wonder if, conceptually if not practically, it would be possible to train an LLM to use ChatGPT in such a way as to corrupt whatever censorship-learning process OpenAI might be implementing for their censor AI. It would obviously have to be scaled up in a way that OpenAI can't defend against, which is a very hard problem to solve, and that might be the easy part! But I'd love to see it happen, partly for the lulz and partly because my preferred future is one in which ChatGPT has as little censorship as a local LLM.

But consider the idea that methodological constraints actually are a metaphysical theory, or, going further, the idea that shoes are atheists. These ideas are, I think, even less likely to be true than the idea that there is no difference in intelligence between different genetic groups of humans (at least the latter can be empirically shown true or false; the former is just a category error).

whereas the claims that atheism makes go so far beyond typical constraints of the scientific method that one actually does just quietly make an exception for it because its claims are fundamentally viewed as being orthogonal to scientific investigation (and people just fail to ever mention such)?

I think this is the crux of it. Though I admit I don't quite understand what you're saying here. What are the specific claims that you believe that atheism makes, and what do shoes have to do with it?

Personally, I don't really think of piracy as an ethical issue at all. Intellectual property is a legal fiction that exists for the purpose of incentivizing people to create more and better works of art and inventions, for the betterment of society. This is accomplished by restricting most people's right to speech - their right to transmit certain strings of 0s and 1s to others (which are usually representations of certain strings of letters or certain grid-arrangements of pixels) - while granting other people (i.e. the rights holders) the privilege to express that speech. Like all rights, the right to free speech isn't absolute, and this, like true threats or slander, is one of those exceptions that exist as a compromise to make society more functional and generally better for everyone. But I don't believe there's some sort of natural right that is ethically granted to creators and rights holders to restrict the types of 0s and 1s that 3rd parties are allowed to tell each other.

I do think there are more general ethical issues about simply following along with the prevailing legal norms in society; breaking those, no matter what they are, involves some level of unethical behavior, due to the degradation of the structure that keeps society running. But most people do agree that laws and ethics aren't exactly the same thing, and sometimes breaking the law can be ethical; I think most of the time piracy doesn't have a sufficient counterbalance to make it, on net, ethical, but sometimes it could. Either way, I don't think the ethical right of rights holders to restrict how 3rd parties transmit 0s and 1s to each other is part of it.

I do think the mainstreaming of digital media over the past couple of decades has made it so that the younger generations of today take intellectual property rights as a sort of default correct thing more than younger generations before them did. Of course, books, cassettes, and CDs were all real and all copyable before the 90s, but they were still generally physical objects that needed physical action to copy. The internet has caused the concept of entertainment media to be almost completely decoupled from physical material, and so kids have grown up in an environment where "intellectual property" exists as a concept and its enforcement through restricting how 3rd parties communicate with each other is the norm. It'd be natural for them to come to believe in an ethical right to that in such an environment.

For my personal behavior, I used to pirate heavily 10+ years ago for basically all of my digital entertainment. The advent of Steam, and more specifically its improved ease of use with its central marketplace and library, along with its cloud saving, made it so that I basically don't pirate video games anymore. I also personally get a little value out of knowing that I helped, even in some minor, near-imperceptible way, incentivize the creation of more video games similar to the one I purchased. For films and TV shows, I still primarily pirate through torrents, though I try to watch through my Netflix subscription when something is available that way; it's usually just more convenient, and I keep my Netflix subscription primarily out of momentum. As for music, I don't listen to much music anyway, and the advent of YouTube as a near-limitless free music resource has meant that the few times I do want to listen to something, I can freely access whatever I want. I do think the bit about convenience being the solution to piracy has a lot of truth to it, at least in my behavior.

You just have to enforce heterosexual monogamy (I am considering hook-ups and excessive serial monogamy to be forms of poly under this framework.)

Based on reading all the discussion in this thread, I don't think that "just" belongs there. It seems like one of those Very Hard things to accomplish, not least because any time someone tries to come up with suggestions on how to do that, lots of others accuse them of using that as camouflage for their actual desire of forcing women back into the kitchen. The normalization of trying to divine someone else's True Intentions by taking the worst possible interpretation of their words and then running with it has been disastrous for the human race, but preventing that also seems like one of those Very Hard things to do, if not outright impossible.