07mk

1 follower   follows 0 users   joined 2022 September 06 15:35:57 UTC
User ID: 868
Verified Email

No bio...

However, perhaps I'm frail hearted or something because it does hurt to see so many attack her so viciously, when they clearly have so much hate in their hearts. Perhaps it's Pollyannaish but I wish that we could do our shaming in a more dignified, and less clearly antagonistic way. It seems that most of the people shaming her, from my read at least, clearly enjoy looking down and judging someone harshly, seeing themselves as better than her. From my perspective, that's not just as bad as what she's doing, but still bad.

I agree with this paragraph broadly, but I also see people jumping from this to claiming that Aella has been "bullied" or that people have been "cruel" to her. From what I can tell from the original link to the tweet in your post, she had to actually search her name in order to find these acts of shaming. If these tweets weren't directed at her or perhaps her immediate peers, I don't see how these could be acts of bullying or cruelty. It is perhaps uncouth, even shameful, to speak ill of someone else in a public forum, but it is neither bullying nor cruel. It's only when it's persistent and directed at the target in an unavoidable or difficult-to-avoid way that it can cross that line.

As far as I can tell from reading the post and the tweet, she's just upset that strangers are speaking ill of her and there aren't enough other strangers defending her in response. This seems like entirely a problem she invented for herself by deciding to place boundaries on things that strangers on the internet talk about with each other concerning her. The "I consent!/I consent!/I don't!" meme comes to mind.

Good ideas arise from craft skill, innate talent plus long hours of practice honing your perceptive faculties and understanding of the medium.

I don't think this is a fundamental law of the universe, though. It's a result of the fact that a good idea is only good if it can be implemented in reality, and as such, people familiar with and talented at the craft of implementing ideas in reality - in the case of images, skilled illustrators with lots of experience manually illustrating images - are the ones able to come up with good ideas.

But as long as it results in a good image, the idea behind it is a "good idea," regardless of who came up with the idea or how. Now, people can translate ideas into images without that deep understanding of the medium*, with that translation process bypassing all/most of the skills and techniques that were traditionally required. And because of that bypassing, what constitutes a "good idea" no longer carries the same limitation of needing to be based on one's understanding of those traditional skills and techniques.

* Some may argue that diffusion models are a medium unto themselves, with their own set of skills to develop and practice, akin to how photography and painting both generate 2D images but are considered different mediums. I'm ignoring this point for now.

I don't think looking for high status outliers is a good way to show that, since the vast majority of people are not high status or influential by any useful definition of the phrase.

Honestly, former prostitutes have better odds of becoming influential just by virtue of being closer to centers of power.

I feel like the outliers in this line of work who really are closer to centers of power are probably so rare that they don't change the median or mean all that much. Like any other entertainment industry job, my guess is that 99.999% are nobodies without any greater access to centers of power than a laywoman (pun not intended).

Makes me wonder if in another 20 years or 40 years we'll see these old camgirls auctioning off the chance to be their partner in a snuff film, to die as they lived.

This is so hilariously dark that I hope we see this in some sort of Futurama/Hitchhiker's Guide-esque scifi comedy.

Also, I'm reminded of when I was a teenager and ran across some photo book at Newbury Comics called "Suicide Girls" which, IIRC, was just softcore porn of women generally in goth makeup and style, but which I initially, mistakenly thought featured women who had committed suicide right after the photos were taken. I guess our society hasn't quite reached that level of degeneracy yet.

In my experience so far, for every one AI-generated artpiece that was a genuine improvement over the alternative of "nothing" or "imagining it by reading a text meme", there are 10 thousand pieces of absolute slop that should have never been published with less effort than it took me to scroll past.

I see similar things on my social media, and I feel the exact opposite. The things that people call "AI slop" are, almost universally, things that would have been considered incredible works in the pre-generative AI era. Even today, they often have issues with things like hands, perspective, and lighting, and though they're often very easy to fix, just as often they aren't fixed before they're posted online. But even considering those issues, if someone came across such works in 2021, most people would find them quite aesthetically pleasing, if not beautiful.

So now we're inundated with this aesthetically pleasing slop that was generated and posted thoughtlessly by some lazy prompter, to the point that we've actually grown tired and bored of it. I see this as an absolute win, and I think my experience on the internet has become more pleasant and more beautiful because of it. It's akin to how Big Macs have come to be considered slop food, and how eating one every day - an option available to almost anyone in the Western world - would mark you as low status in many crowds; yet for most of human existence, if you had that easy and cheap access to food that palatable and that nutritious, you'd have been considered to be living an elite life. For access to such high-quality food to have become so banal as to be considered slop is, I think, a sign of a great, prosperous world that is better than the alternative. So too for images (and video and music soon, hopefully).

I think there's perhaps a lot of truth to what you think. Giving praise and encouragement to behaviors that feel good but are long-run self-defeating is actually cruelty, not niceness.

I'm reminded of the cliche of the fat/ugly woman being yas-queen'd by her friends and confused about why no high-value men want to settle down with her. Or what I'd guess is the counterpart: the nice guy being praised for being meek and submissive and confused about why he gets no first or second dates. I don't know how often either happens, but I'm pretty sure they're cliches for good reason.

Recently, in video games, there have been a number of high-profile failures by major AAA studios that spent the better part of a decade making games that either failed spectacularly (e.g. Concord) or just did mid in sales, nowhere near enough given the dev costs (e.g. Assassin's Creed: Shadows, Dragon Age: The Veilguard), and one common talking point I saw was that these devs probably got nothing but affirmations as they were developing these disasters, which appealed to themselves and almost no one in the target audience. As a result, many of these devs now face layoffs and their studios even closure. We don't know if the narrative of internal echo chambers of affirmation is actually correct, but if it is, then those affirmations weren't nice, they weren't kind, they were cruel, for encouraging the devs to create games that would end up dooming their jobs, their studios, or both.

Perhaps there are cruel ways to discourage the type of lifestyle that Aella practices. Perhaps most ways of discouragement are cruel. But that doesn't make the encouragement of such any less cruel.

FWIW, I perceive BinaryHobo as making a point purely about the structure of your argument, rather than about the actual trans issue specifically. And I think their point is correct. We can all tell based on their behavior that TRAs can clearly tell the difference between trans women and cis women and have to use linguistic sophistry and "mind-killing," as Arthur Chu might put it, to justify bucketing them into the same category of "women" instead of the former being a sub-category of "men," which is distinct from "women."

But it's at least theoretically possible for someone to honestly, in good faith, have separate distinct sub-categories of things which both fit into a larger category that they share. Like how someone can categorize apples painted orange as "orange apples" which fit into the larger category of "apples" that also include non-painted red apples. As such, the mere usage of "trans women" as a category distinct from "cis women" doesn't necessarily logically imply that they're using linguistic sophistry to paper over their true belief that "trans women" don't fit into the larger category of "women." All the other stuff surrounding it does.

I can't see any theoretical justification for it.

This is the way I always understood it. Lacking the ability to detect any internal experience other than our own, the way we distinguish between two different things is by applying input to them and seeing if there are differences in output - e.g. we shine light on something and detect what qualia the light that reflects off of it into our eyeballs generates in our minds. Detecting intelligence isn't as simple as detecting the color or shape of something; the input wouldn't be light rays but rather words, to see what words get returned in response. If there's no way to distinguish between two entities in this way, then it makes no sense to say that one has human-level intelligence while the other lacks it. For that to be the case, there must be some way to induce different outputs from those two things with the same input - in something relating to intelligence, anyway; input-output of words probably doesn't cover the entirety of all possible detection mechanisms, but it does seem to me to cover a lot.

I don't want to talk to an AI, though. I want to talk to another Motte user who is using their mind to procure text generated by an AI in response to prompts generated by their mind.

Is there such a complex? I thought one of the notable things about the Hawk Tuah girl was just how unusually shrewd she was for being able to leverage that one viral street interview into an actual internet celebrity career.

I almost feel a bit sorry for the assassin. Sans any evidence, my speculation is that he saw the love and adoration Mangione was receiving and decided he wanted some of that by pulling off another senseless ideological murder. But he's just not good looking enough, and his victims not suitably high up on the food chain, for him to garner anywhere near the same level of following, IMHO. There's something almost funny about this, him copying Mangione with a cargo cult understanding of the phenomenon, when Mangione himself seemed to have a cargo cult understanding of how assassinations are supposed to work for effecting change.

Then again, I could be completely off about this, and he was a truly devout and deranged ideologue. Or he could end up garnering even more adoration than Mangione did. Time will tell, I suppose.

Given hypergamy, I wouldn't be surprised if a woman's wealth - or at least her earnings - is positively correlated with how important she considers it that her partner be gainfully employed and lack a criminal record (which might not lower status in all contexts, but which does pose a greater risk to the man's ability to keep earning money).

Now, I've had a few people acknowledge this point, and accept that, sure, some asymptotic limit on the real-world utility of increased intelligence probably exists. They then go on to assert that surely, though, human intelligence must be very, very far from that upper limit, and thus there must still be vast gains to be had from superhuman intelligence before reaching that point. Me, I argue the opposite. I figure we're at least halfway to the asymptote, and probably much more than that — that most of the gains from intelligence came in the amoeba → human steps, that the majority of problems that can be solved with intelligence alone can be solved with human level intelligence, and that it's probably not possible to build something that's 'like unto us as we are unto ants' in power, no matter how much smarter it is. (When I present this position, the aforementioned people dismiss it out of hand, seeming uncomfortable to even contemplate the possibility. The times I've pushed, the argument has boiled down to an appeal to consequences; if I'm right, that would mean we're never getting the Singularity, and that would be Very Bad [usually for one or both of two particular reasons].)

This seems like a potentially interesting argument to observe play out, but it also seems close to a fundamental unknown unknown. I'm not sure how one could meaningfully measure where we are along this theoretical asymptote in the relationship between intelligence and utility, or even whether there really is an asymptote. What arguments convinced you, both that this relationship is asymptotic (or at least has severely diminishing returns) and that we are at least halfway to that asymptote?

I absolutely think they should be. Now, maybe it's not practical to check each student's individual political preferences and assign bespoke assignments for them on that basis (which could be gamed anyway). Rather, humanities-based courses should test students on their ability to defend a wide variety of different, highly offensive and ideally "dangerous" ideas in whatever topics are at hand, to stimulate actually learning how to think versus what to think.

Hard to say if that will work, though; teaching students how to think seems to be one of those things that people in education have been trying to do forever, without any sort of noticeable progress whatsoever. I just know that that was how I was educated, and it seemed to work for me and my classmates (but of course I'd think that, so my belief that it seemed to work should count for approximately nothing), and even if it did work, that doesn't mean it's generalizable.

I can't be 100% sure, but I think even if I hadn't been told, I would have pegged this as LLM-produced. It has the exact sort of "how do you do, fellow human kids?" energy that I'd expect from an LLM that was prompted to create a post that sounded casual, especially the very first paragraph.

The steelman would probably be that they've transitioned from one gender to no gender, rather than transitioning from one gender to another gender.

The true reason is probably that logic is an oppressive cis-heteropatriarchal construct, and this person ended up genuinely feeling like they're whatever identities were most useful and convenient for them in this context, which in this case happened to be both agender and trans.

Tbf to Amadan, the use of 'generative AI' as a description of use case rather than of design is a pretty common one from anti-AI artists and writers.

Hm, I was not aware of that. I'd thought most of such people at least ostensibly maintained a principled objection against generative AI for its training methods, rather than one based on pure protectionism.

That's fair, perhaps this "mania," as you call it, might be the immovable object that matches up to the irresistible force of wokeness. I just think that, sans definitive proof, any denial of LLM usage from an author deemed sufficiently oppressed would be accepted at face value, with any skepticism deemed Nazi-adjacent and appropriately purged.

Now I'm imagining a scandal where someone publishes a sort of postmodern scifi novel that they claim to be the unedited ChatGPT log where they had it write a novel piece by piece, publishing all the prompts they input between segments and all, but it comes out that, actually, the author fraudulently crafted the novel, writing each and every word the old fashioned way like a novelist in the pre-LLM era. Bonus points if it was written by hand, as revealed by a notebook with the author's handwriting showing the rough drafts.

Bonus bonus points if it's then revealed later on that the handwritten manuscript was actually created by an advanced 3D printer working off a generative AI based on a prompt written by the author.

I see a couple of issues with that scenario.

One is that there will almost always be plausible deniability with respect to LLM usage. There would have to be a slip-up of the sort where the author includes the meta-text that chatbot-style LLMs provide - something like "Certainly! Here is the next page of the story, where XYZ happens." - for it to be definitive proof, and I'd expect that the audience and judges would pick up on that early enough to prevent such authors from becoming high status. That said, it could still get through, and someone who did a good enough job hiding this early on could slip up later in her career, casting doubt on her original works.

But the second, bigger issue is that even if this were definitively proven, with the author herself outright claiming that she typed a one-word prompt into ChatGPT 10 to produce all 70,000 words of her latest award-winning novel, this could just be justified by the publishing industry and the associated awards on the basis of her lacking the privilege that white/straight/cis/male authors have, with the LLM usage merely ensuring equity by granting her and other oppressed minorities the writing ability that privileged people are simply granted by their position in this white supremacist patriarchal society. Now, you might think that this would simply discredit these organizations in the eyes of the audience, and it certainly will for some, but I doubt that it would be some inflection point or straw that breaks the camel's back. I'd predict that, for the vast majority who are already bought in, this next spoonful would be easy to swallow.

This generalized antipathy has basically been extended to any use of AI at all, so even though the WorldCon committee is insisting there has been no use of generative AI

(Emphasis added).

If they admit to using ChatGPT, how can they claim they didn't use generative AI? ChatGPT and all LLMs are a type of generative AI, i.e. they generate strings of text. ChatGPT, I believe, is also trained on copyright-protected works without permission from the copyright holders, which is the criterion by which many people who hate AI judge generative AI to be "stealing" from authors and artists.

Just based on this description, it sounds like these WorldCon people are trying to thread a needle that can't be threaded. They should probably just say, "Yes, we used generative AI to make our lives easier. Yes, it was trained on copyright-protected works without permission. No, we don't think that's 'stealing.' Yes, this technology might replace authors like you in the future, and we are helping to normalize its usage. If you don't like it, go start your own AIFreeWorldCon with blackjack and hookers."

I'm a Catholic, and not a particular fan of Trump, and I found the picture both inevitable and mildly amusing.

Only mildly amusing? I found it holy amusing!

A part of this that hadn't occurred to me until I saw it pointed out is that there seems to be a sort of donation duel between this lady's case and that of Karmelo Anthony, a black teen charged with murdering a white teen at a track meet by stabbing him in the heart during a dispute over seating. I think there was a top-level comment here about this incident before, but there was a substantial amount of support on social media for Anthony on racial grounds, including fundraising for his defense. I get the feeling that a lot of the motivation to donate to this lady comes from people who feel that the support Anthony has been getting on racial grounds has been unjust, and that supporting her is a way of "balancing the scales," as it were. This isn't the instantiation of "if you tell everyone to focus on everyone's race all the time in every interaction, eventually white people will try to play the same game everyone else is encouraged to" that I foresaw, but it sure is a hilarious one.

Now, one conspiracy theory that I hope is hilariously true, is that the guy who recorded this lady was in cahoots with the lady herself and staged the whole thing in order to cash in on the simmering outrage over the Anthony case. But I doubt that anyone involved has the foresight to play that level of 4D chess.

I don't think either are particularly moral, and it's a cultural battle to be waged against both. I don't think we'll ever convince fellow humans to stop lying to manipulate people, but I can at least imagine a world where we universally condemn media companies who publish AI slop.

So I do think there's a big weakness with LLMs, in that we don't quite have a handle on how to robustly or predictably reduce hallucinations the way we can with human hallucinations and fabrications. But that's where I think the incentives of the editors/publishers come into play. Outlets that publish falsities by their human journalists lose credibility and can also lose lawsuits, which gives the people in charge an incentive to check the text their human journalists generate before publishing it, and I see similar controls being effective for LLMs.

Now, examples like Rolling Stone's A Rape on Campus article show that this control system isn't perfect, particularly when the incentives for the publishers, the journalists, and the target audience are all aligned with respect to pushing a certain narrative rather than conveying truth. I don't think AI text generators exacerbate that, though.

I also don't think it's possible for us to enter a world where we universally condemn media companies who publish AI slop, though, unless "slop" here refers specifically to lies or the like. Given how tolerant audiences are of human-made slop and how much cheaper AI slop is compared to that, I just don't see there being enough political or ideological will to make such condemnation even a majority, much less universal.

Personally: AI-hallucinated quotes are worse than fabricated quotes, because the former masquerades as journalism whereas the latter is just easily-falsifiable propaganda.

AI-hallucinated quotes seem likely to be exactly as easily falsifiable as human-fabricated quotes, and easily-falsifiable propaganda seems to be an example of something masquerading as journalism. These just seem like descriptions of different aspects of the same thing.

Can I extend this to your view on the OP being that it doesn't matter at all that the article that Adam Silver reposted is AI slop, versus your definition of "slop" in general? It doesn't move your priors on Adam Silver (the reposter), X (the platform), or Yahoo Entertainment (the media institution) even an iota?

I'm not Dean, but I would agree with this. I didn't have a meaningful opinion on Yahoo Entertainment, but, assuming that that article was indeed entirely AI-generated, the fact that it was produced that way wouldn't reflect negatively or positively on them, in my view. Publishing a falsehood does reflect negatively, though. As for Silver (is it not Nate?), I don't expect pundits to fact-check every part of an article before linking it, especially a part unrelated to the point he was making, so him overlooking the false quote doesn't really surprise me. Though, perhaps, the fact that he chose to link a Yahoo Entertainment article instead of one from a more reputable source reflects poorly on his judgment; that wouldn't change even if Yahoo Entertainment hadn't used AI and the reputable outlet had.