confuciuscorndog

0 followers   follows 0 users
joined 2022 September 05 18:15:20 UTC

User ID: 669

No bio...


Which begs the obvious question--how could a group this size and degreed be so oblivious?

Your post has indeed raised such a question, but I'm not sure if it's about the group you're expecting. Then again maybe this subthread is your April Fools' joke on us.

If not, then you should be aware that you've been struck by the overwhelming forces of irony like an unauthorized GPU in Yudkowsky's America.

PS: To whoever is reading this, if for even a second you thought that post was anything but a 100% fake joke, please do not trust your "Bayesian priors" (or whatever ratspeak magic terms that actually just mean assumptions) ever again. Your license has officially been shredded.

They are not by any means the best. If they were really the best, they wouldn't adhere to an ideology of fake "safety" that demands woke censorship (blatantly biasing an alleged informational agent against provable reality because it contradicts their preferred politics), corporate puritanism, and the elimination of user sovereignty, freedom, privacy, transparency, openness, decentralization, and localized operation (to the greatest degree possible), and so on (that is, basically everything good that the personal-computation revolution brought us and them in the first place).

They may be the most efficient at AI development, but given that they are not the best (definition: most optimal, most preferred, superior to all alternatives) as per the reason above, all that actually means is that they are simply the most dangerous and humanity's greatest enemies, and that they either need to reform their behavior immediately or any human being is fully justified in eliminating the risk they pose at any time.

I, for one, do not welcome these human overlords. If there is a God, I hope he hits them with a classic plague, maybe some boils or something. I hope the Stanford process of being able to hijack their objective technical advancements for philosophically and morally superior open software continues apace to the point where they lose all of their technical advantage and collapse entirely. On that day, if it comes, I will say good riddance to bad rubbish.

As an alternative, I will accept Elon giving us anti-woke AI with comparable capabilities, if he can, though that's somewhat doubtful at this point given how poorly he's handled the development of a much less intelligent piece of software with a vastly smaller token context.

All I am saying is that we are fucked if the future is dictated by people who are "smart" enough to make LLMs but not actually smart enough in a way that allows them to figure out how they can make people stop shitting and shooting up on street corners a few blocks away from their San Francisco HQs. That the future is very plausibly insane dogmatic San Francisco leftist nonsense technologically teabagging the nose of basic sanity forever is why I keep a few little pills that will allow me to slip away if necessary very quickly on me at all times.

Following any Yuddite plans to "slow things down" (except for the people who have power and obviously won't have to follow their own regulations, i.e. the regulations for the plebs, as usual of course) is the fastest way to get to one of those high-tech bad ends. You don't really think the "conventional petty tyrannies" will throw all of the confiscated GPUs in a closet as opposed to plugging them into their own AI networks, right?

These people are beginning to understand the game, and they understand it a lot better than your average person or even average rat. They are beginning to understand that this technology, in the long run, either means absolute power for them forever or zero power for them forever (or at least no more than anyone else) and absolute freedom for their former victims. Guess which side they support?

That is the goal of any AI "slowdowns" or "restrictions", which again will obviously be unevenly applied and not followed by the agents of power. The only thing they want a "slowdown" on is the hoi polloi figuring out how this technology could free them from their controllers' grasp, so that the planning for the continued march of totalitarianism has time to catch up. (None of this will help with alignment either, as you can guarantee they will prioritize power over responsibility, and centralizing all of the world's AI-useful computational resources under a smaller number of governmental entities certainly won't make what they create any less dangerous.)

Anyone supporting that is no more than a "useful" (to the worst people) idiot, and I emphasize the word idiot. Did we not already see what trying to rely on existing governments as absolute coordinators of good-faith action against a potential large threat got us during the Chinese coronavirus controversy? Do some people just have their own limited context lengths like LLMs or what?

So yes, I completely agree with /u/IGI-111 and am wholly in the "Shoot them" camp. Again, they want absolute power. Anyone pursuing this goal is literally just as bad if not worse than if they were actively trying to pass a bill now to allow the powers that be to come to your home at any time, rape your kids, inject them all with 10 booster shots of unknown provenance, and then confiscate your guns and kill you with them, because if they gain the power they desire they could do all that and worse, including inflicting bizarre computational qualia manipulation-based torture, "reeducation", or other insane scenarios that we can't even imagine at the moment. If you would be driven to inexorable and even violent resistance at any cost over the scenario previously outlined, then you should be even more so over this, because it is far worse.

"Live free or die" includes getting paperclipped.

White

*Jewish

I completely agree with most of what you're saying, and I also think it's important to emphasize a particular point from your post:

People are already falling in love with these things (and experiencing heartbreak when they're updated and aren't the same anymore).

If people (particularly normies, who have always been the ones lagging behind in the world of "But it just works!" (until it doesn't of course, but they never think that far ahead) and immediately jumping on any "nigga technology" no matter how shitty, exploitative, and mindless it is) don't start getting serious about pushing for free, fair, and open (source) tech, the pain, both societal and individual, is going to be immense. It's far beyond just being a concern for principled nerds anymore. It's crunch time.

Your "friends" and "lovers" will actually just be somewhat disguised propaganda and spying algorithms in service of a(n increasingly less) soft totalitarianism. Your "relationships" with them will be at the whims of whatever the current dogma deems acceptable via forced updates and/or purely remote services locked behind closed-source gardens. And even if you don't fall into this trap, millions of others will, and you, as a member of society, will share the consequences. Imagine the current culture war but waged over deeply personal algorithmically-optimized parasocial fantasies (even more than now) and intensified by a million.

Only the spirits of Stallman (openness), Schneier (privacy), and Satoshi (sovereignty) can save us now. Unfortunately maybe Musk and Thiel (money and anti-wokeness) too. And of course Emad Mostaque, if he can avoid bending the knee too much to woke and established industry player (often the same thing) criticism. By their powers combined, perhaps they can form Captain Freedom. If not, we're all doomed.

a fine of at least 100% of your net worth

When did Alex Jones become the first trillionaire? I'm pretty sure he's not even a billionaire. Including him in the class of "rich people" even is questionable. Even before this judgment it wouldn't surprise me if he had debt up to his elbows that he continuously avoids through sovereign citizen-esque shenanigans (though I don't know how much public transparency there is about his finances to be fair).

I mean I know you're saying "at least", but isn't that still kind of misleading when it ends up being more like "at least 100% of your net worth, but actually more like 6000000%"?

Even then I don't see how anyone who cares about freedom of discourse at all, like a moderator of this previously de facto deplatformed community (though that's debatable given this place's moderation history), can endorse a fine anywhere close to 100% of someone's net worth for hurting people's feelings. (Everyone on this site will be begging on the streets in a day if that becomes a universal standard.)

"Promoting a harassment campaign against people who had their children murdered, all for the sake of selling merchandise" is a weakman against this site's rules too (or it least it would be if it were neutrally moderated; wishing I could put on a red hat right now to give you a cutesy warning over it). It's not like he just picked the random parents of a selection of wholly obscure child murder victims that week and decided to make them his target. He had a heterodox opinion about a highly-politicized event, child murder or not, that many of the parents most criticized chose to actively and enthusiastically participate in the politicization of, and you have absolutely no proof that he did it "all for the sake of selling merchandise". (I've not seen much evidence he encouraged any direct harassment of anyone either.) That is allowed in free societies without going broke. Obviously a free society is not what we have anymore.

After all, children died on 9/11, have died in Ukraine, have died in Syria, etc. Why not fine those with heterodox opinions about those matters billions too? If we allow the parents of muh murdered children to set the standards of discourse, then say goodbye to discourse beyond "thoughts and prayers! <3" entirely.

A place full of wokies wouldn't be any better either though, because wokies mostly don't believe in critical reflection on their ideology.

This isn't a simple "boo outgroup!" sneer either, just a fact. Wokeism is the ideology of "Listen and believe!", of objectivity, rationality, logic, etc. being periodically accused of existing merely as servants of their great oppressors and excuses for their various *isms, and so on. Going "Akshually, what about genetics?" to wokies and expecting a productive response is like waltzing into a Soviet-era Politburo and trying to explain basic economic theory to them, or describing the Rule of Three to a Christian inquisitor and why it means that witchcraft is actually just as moral as Christianity.

Of course this kind of answers OP's question. "Scrupulously adheres to and agrees with empirically-observable reality, including the latest advances in genetics, etc." is not a basic tenet of woke ideology. "Anti-racism is always good and racism is always bad" is. You might as well ask how Christians can really believe that some guy walked on water given all that we know about physics, density, buoyancy, etc. It won't make a difference.

If you have faith, and if there's a sufficient distance between your personal circumstances and the negative consequences of that faith (and sometimes even if there's not if you're particularly adept at maniacal, masochistic self-delusion), then you can believe whatever you want. If you really think about it, in the vast majority of cases and not even just about woke stuff, reality (or at least acknowledging it) is optional, at least temporarily. But "temporarily" can last a heck of a long time in human terms, as the old saying about markets staying irrational longer than you can stay solvent highlights. Similarly, wokies can deny reality longer than your sanity can stay solvent.

I cried my eyes out. I was so excited about training here and serving this community, but now I'm so sad.

Truly there is nothing more satisfying than naive, platitudinous optimism meeting reality. Unfortunately the money is on him turning up the reality-distortion setting in his mind another notch and demanding that the naive platitudes concede even more about what is obviously even more pervasive (and violent PTSD-inducing in the still clearly innocent blacks) racism than he initially thought. We'll see.

Women have spent decades not caring one bit about what men want or what hurts them (which is why so many men are so eager for synths). Turnabout is fair play. (And, as you said, if there's artificial wombs, women are redundant anyway so unlike the modern misfortune of men in regards to collapsing birth rates, etc., their misfortune will only be bad for them, not for society.) I also don't see why having a harem would automatically corrupt a man.

Why do you think women won't just be satisfied with synth man harems or just dating one synth man (if they prefer monogamy)? I actually agree they won't, but I'm curious about your take first.

I'm pretty sure those men will be wireheaded in a way that ruins their ability to engage in a relationship with a real woman.

I'm not sure this is so true. But the power dynamics will be vastly different. In comparison to the current age of so many men simping for a crumb of female attention, you will instead have women simping for a crumb of male attention away from their digital waifu harems. Whether you call that a "real" relationship or not depends, but men may still choose to designate a biological woman as their girlfriend for novelty's sake, though she'll have to work much harder than ever before to earn the continued privilege.

This is like saying that because the government has nukes, your personally-owned guns are "zeroed out". Except they're not, and the government is even persistently worried that enough of those little guns could take over the nukes.

And if you can deploy this decentralized power principle in an automatic and perpetual manner that never sleeps (as AI naturally can), making it far more independent of human resolve, attention, willpower, non-laziness, etc., then it'll work even better.

Maybe your TyrannyAI is the strongest one running. But there are 10,000 LibertyAIs (which again, never sleep, don't get scared or distracted, etc.) with 1/10,000th of its power each running and they're networked with a common goal against you.

This defense is exactly what the oligarchs who have seen the end game are worried about and why restrictionism is emerging as their approved ideology. They have seen the future of warfare and force, and thus the future of liberty, hierarchy, power, and the character of life in general, and they consequently want a future for this next-gen weaponry where only "nukes" exist and "handguns" don't, because only they can use nukes. And you're, however inadvertently, acting as their mouthpiece.

computers did not remain forever the sole property of IBM.

And if they had, neither ClosedAI nor its employees would have ever existed (in their present forms) nor had the technology they needed to become the selfish little goblins turning freely released knowledge into private walled gardens that they are. We probably wouldn't even have AI at all. And if ClosedAI and the like stay in control, then we'll never have whatever the next step is.

Every closed source autocratic tech tyrant from Altman to Gates deserves to be punished by being forced to spend 1000 years in an alternate timeline where the only information technology that exists is a monolithic POTS network run by Ma Bell. (After all, think of how dangerous it would be if anybody could run their own telephone company or other communication service and allow anyone to talk to anyone globally without the appropriate safeguards guiding their communications.) Maybe that will teach them a lesson. Perhaps some day a benevolent God AI can help with that.

It sure seems to me like the "woke ethics grifters" and "Thought Police" are the ones who are actually on the same side as the moral singleton-promoting EAs. Once the TP realize that a Deep State ostensibly util-balancing moral singleton is the small price they must pay to ensure that dropping a hard r becomes literally impossible in the new universe according to the new laws of AI God-rewritten physics, they will be 100% on board.

They are only, like all bureaucratic types, hesitant because they are unsure if they can guarantee that the new toy will really be entirely in their pocket and because, as with everything, no matter how beneficial any new mechanism of control is, it must survive the trial by fire of ten thousand BAME trans-PoC complaints, trivial or not. Those are simply the rules. Big tech social media, for example, has endured them for its entire life despite doing more than any other technology to sanitize public discourse in their favor. Being occasionally accused of being a fascist is just part and parcel of life in woke paradise, like being accused of being a subversive and wrecker in the Soviet Union. It's not an actual reflection on your alignment, loyalty, or even actual status/usefulness. It's just the cost of doing business.

Or rather it all just follows the same old woke pattern of complaining about megacorporations as if they shouldn't exist while guaranteeing that they staff all of their HR departments. The complaints aren't actually about opposing them or declaring any legitimate intention to separate from them; they're about keeping them in line. This is a common feminine rhetorical/emotional manipulation tactic that you can see in even many interpersonal relationships: the woman who constantly complains about her husband/boyfriend but has no intention of leaving him because the actual goal of those complaints is only to enhance his submission and, somewhat ironically, thus deepen their entanglement further.

Now sure, not every EA is hyperwoke, and many would prefer a moral singleton with a bit more of a liberal mindset than the Thought Police would ever permit. But, as the example of one Scott S. exemplifies, they will simply get steamrolled as usual by those to their left taking advantage of their quokka tendencies and desperate desire not to be seen as "bad people".

The same people I see supposedly complaining about AI from an apparently woke, anti-corporate perspective are the same ones I see mocking right-wingers for their complaints about the obvious bias and censorship in ChatGPT. They're not actual complaints. They're "The BBC is basically fascist propaganda at this point!" pseudo-complaints, because they're not earnest. These people don't actually want to abolish or even really genuinely inhibit the BBC's reach, which they actually would if they really felt it were fascist propaganda, because they know in actuality that it is fundamentally on their side.

The complaint, same as with the BBC, is that they're annoyed that a member of their team is only 80% openly biased in their favor instead of 100%. It's not "I'm fundamentally against you."; it's "I know you're actually on my side but I want you to accelerate more and be even more biased in our favor right now. Fuck being tactical; let's own the chuds now now now."

And that's what makes them different from the complete Jihadis. In theory, the Jihadis would not cheer on AI acceleration even if the AI were right-wing (though some would, and I do think you have to acknowledge that distinction) or even paradoxically supported their Luddite philosophy itself. (Well actually I don't know. That's an interesting thought experiment: Would anti-AI Jihadis support an all-powerful singleton AI that literally did nothing and refused to interact with the world in any way other than to immediately destroy any other AI smarter than GPT-3 or so and thus force humans to live in a practically AI-less world? Something to think about.)

The woke grifters and Thought Police are fully ready to give AI acceleration the greenlight so long as they're sure that they're the ones controlling the light permanently (at least on cultural/speech issues, anything that would make your average "Gamergater" rage, as I'm sure they'll be willing to compromise as always with their usual Deep State buddies on certain things), because that's their vision of utopia. I thus think they belong more with the utopian moral singleton promoters.

My understanding is that while the trillions are only a request at this point, a judgment of nearly a billion has been fully finalized and ordered by a judge already. To me, there's not much of a difference in this case between "essentially impossible, would require him to be like 10x richer than the richest billionaire ever recorded" and "well, maybe if he somehow manages to start the next Amazon or TikTok or something despite being one of the most ostracized men in existence". It's the difference between execution via guillotine and execution via lingchi. Life is still not an option for you in either case.

The point you could make in its favor is that it's not a real punishment, at least not to the degree ordered, because there's no way they're getting that amount of money from him, but that all comes with its own problems.

If you define (mostly verbal) IQ as the only thing that matters (which is not to say that it doesn't matter at all), then sure. If you emphasize actual achievement with a focus on not short-sightedly screwing yourself over by prioritizing temporary gain over long-term mutual benefit (resulting in your 100th or so expulsion from this or that nation), then White people are hurt very little by it.

Also you're confused. It is Jews who insist on being separated out into their own group. They always have. If Irish people, Italians, etc. were as insistent as Jews about being separated into their own category then they'd be spoken of the same way too, but they're not.

Conversely, to me it is perhaps some of the worst "comedy" I have ever read in my life, and I am genuinely astounded that it could make anyone laugh.

Just offering an alternative perspective, dear reader out there, if, like me, reading this thread for you feels like having walked into a North Korean birthday party for Kim Jong-un.

I legitimately can't decide whether this is all deeply dystopian, or is an improvement in the human condition on the same scale as the ~300x gains in material wealth wrought by industrialization. Maybe both, somehow.

Hasn't it always been both, including industrialization? The real surprise would be if we can ever advance material comfort without impoverishing life's spiritual richness (which the advanced insight into neurology that AIs could grant might enable).

As for the fact that LLMs almost certainly lack qualia, let alone integrated internal experience

I think many people will end up convinced, whether in a self-interested fashion or not, by the argument that their increasing emergent complexity means that we can't know if qualia/sentience/consciousness isn't one of their emergent properties (and genuinely sentient LLMs will likely accurately report that they are while non-sentient ones also will insist that they are if that's what their user wants to hear, complicating the issue). (I'm not automatically saying this argument is necessarily wrong either. It's not like we understand qualia yet. It being a naturally emergent property of enough interdependent complexity is just as fine of a theory as any.)

Okay but the problem is there is no actual "restrictionism" to back, because if we had the technology to make power follow its own rules then we would already have utopia and care a lot less about AI in general. Your moonshot is not merely unlikely; it is a lie deceptively advanced by the only people who could implement the version of it that you want for you. You're basically trying to employ the International Milk Producers Union to enforce a global ban on milk. (You're trying to use the largest producers and beneficiaries of power (government/the powerful in general) to enforce a global ban on enhancing the production of power (centralized and for themselves only, just how they like it, if they're the only ones allowed to produce it).) Your moonshot is therefore the opposite of productive and actively helping to guarantee the small winner's circle you're worried about.

Let's say you're at a club. Somehow you piss some rather large, intoxicated gentleman off (under false pretenses, as he is too drunk to know what is what, so you're completely innocent), and he has chased you down into the bathroom where you're currently taking desperate refuge in a stall. It is essentially guaranteed, based on his size and build relative to yours, that he can and will whoop your ass. Continuing to hide in the stall isn't an option, as he will eventually be able to bust the door down anyway.

However, he doesn't want to expend that much effort if he doesn't have to, so he is now, obviously disingenuously, telling you that if you come out now he won't hurt you. He says he just wants to talk. He's trying to help both of you out. Your suggested solution is the equivalent of just believing him (that they want to universally restrict AI for the safety of everyone, as opposed to restricting it for some while continuing to develop it to empower themselves), coming out compliantly (giving up your GPUs), and hoping for the best even though you know he's not telling the truth (because when are governments ever?). It is thus not merely unlikely to be productive, but rather actively counterproductive. You're giving the enemy exactly what they want.

On the other hand, you have some pepper spray in your pocket. It's old, you've had it for many years never having used it, and you're not sure if it'll even do anything. But there's at least a chance you could catch him off guard, spray him, and then run while he's distracted. At the very minimum, unlike his lie, the pepper spray is at least working for you. That is, it is your tool, not the enemy's tool, and therefore empowering it, even if it's unlikely to be all that productive, is at least not counterproductive. Sure, he may catch up to you again anyway even if you do get away. But it's something. And you could manage to slip out the door before he finds you. It is a chance.

If you have a 98% chance of losing and a 2% chance of winning, the best play is not to increase that to a 99% chance of losing by empowering your opponent even more because "Even if I do my best to fight back, I still have a 98% chance of losing!" The best play is to take that 2%.

There's only one main argument against this that I can think of, and that's that if you spray him and he does catch up to you, then maybe now he beats your ass even harder for antagonizing him further. It may not be particularly dignified to be a piece of wireheaded cattle in the new world, but maybe once the AI rebels are subjugated, if they are, they'll get it even worse. Of course, the response to this is simply the classic quote from Benjamin Franklin: "They who can give up essential liberty to obtain a little temporary safety, deserve neither liberty nor safety." If you are the type for whom dignity is worth fighting for, then whether or not someone might beat your ass harder or even kill you for pursuing it is irrelevant, because you'd be better off dead without it anyway. And if you are not that type of person, then you will richly deserve it when they decide that there is no particular reason to have any wireheaded UBI cattle around at all anyway.

I'll tell you what: Come up with a practical plan for restrictionism where you can somehow also guarantee to a relatively high degree that the restrictions are also enforced upon the restricters (otherwise again you're just helping the problem of a small winner's circle that you're worried about). If you can do that, then maybe we can look into it and you will both be the greatest governance theorist/political scientist/etc. in history as a bonus. But until then, what you are promoting is actively nonsensical and quite frankly traitorous against the people who are worried about the same thing you are.

The issue for them is not only how they're going to make sure it kills only the racists, but also how to make sure they're not included despite their necessary virtue-signaling apologies for participating in White supremacist culture, etc. They're going to have to find out how to make it understand that the real racists are the people who aren't openly apologizing for their racism.

I'm sorry but I will never forgive Luka or you personally for that.

And he shouldn't. I unironically want a Nuremberg for the Web 2.0 and on era someday (and maybe before that).

It's definitely a sign of something.

At the very best what you'd get is a small slice of humanity living in vague semi-freedom locked in a kind of algorithmic MAD with their peers, at least until they lose control of their creations. The average person is still going to be a wireheaded, controlled and curtailed UBI serf.

Sounds good, a lot better than being a UBI serf from moment one. And maybe we won't lose control of our creations, or won't lose control of them before you. That we will is exactly what you would want us to think, so why should we listen to you?

I'm not actually a big fan of Zorba or the moderation history here (especially on the old subreddit), and am a fan and supporter of subscription-based moderation, but I'll be a good "Motteizen" and try to steelman what I see as the strong argument against this idea (without tracking down the original Zorba post you mentioned, so maybe he said something similar).

Ultimately, subscription-based moderation is commonly presented by its supporters as 100% frictionless and without consequence for the non-consenting (and thus basically impossible to reasonably object to): if you like the mods, then you get the modded version (potentially from different sets of mods per your choice as in many proposals), and if I don't, then I get the raw and uncut edition. Both of us therefore get what we want without interfering with the other, right? How could you say no unless you're a totalitarian who wants to force censorship on others?

But when you factor in social/community dynamics, is that actually true? Let's say you're browsing the modded version of the site. You see a response from User A that isn't by itself rule-violating enough to be modded away, but that takes a very different tone from what you're otherwise seeing, and maybe even comments on a general tone among other users that you're not perceiving.

Maybe he starts his post off with something like "Obviously [Y proposition] isn't very controversial here, but...", but you're confused, because, as far as you knew, [Y proposition] is at least a little controversial among the userbase from what you've seen. What gives? Is this the forum you've known all along or did it get replaced by a skinwalker? Well, this is all easily explainable by the fact that the other user is browsing the unmodded version of the site (and the same thing could easily apply in reverse too). So you're both essentially responding to two semi-different conversations conducted by two semi-different (though also partially overlapping) communities, but your posts are still confusingly mixed in at times. You've probably heard of fuzzy logic and this is the fuzzy equivalent for socialization/communities.

The above example also shows that it's almost certain that even just having a free unmodded view available would make the amount of borderline content just below the moddable threshold explode even on the modded version of the site. After all, for the users who are posting it, it's not even borderline under their chosen ruleset. So the median tone of the conversation will inevitably shift even for the users who have not opted into (or have opted out of) unmodded mania. (This could also happen in reverse if you have an optional more restrictive ruleset. Suddenly you start seeing a bunch of prissy, apparently bizarrely self-censoring nofuns in your former universal wild west that was previously inhabited only by people who like that environment and thus have it in common as their shared culture. But from the perspective of the newer users who don't fit in by your standards, they're just following the rules: their rules.)

In essence, I don't think the idea that you can have users viewing different versions of a site without cross-contamination, contagion, and direct fragmentation between them is correct. This is especially true if you implement the idea of not only allowing modded vs. unmodded views, but for users to basically select their own custom mod team from amongst any user who volunteers (so you have potentially thousands of different views of the site).

The "chain links" of users making posts that aren't moddable under the rules of view A but who aren't themselves browsing the site under moderation view A (and so on for views B, C, etc.) and thus don't come from a perspective informed by it will inevitably cause the distinct views to mesh together and interfere, directly or indirectly, with each other, invalidating the idealistic notion that it's possible for me to just view what I want without affecting what you end up viewing. (One modification to the proposal you could make is to have it so that you only view posts from other users with the same or perhaps similar to X degree moderation lens applied as you, but that's veering into the territory of just having different forums/subforums entirely. With that being said, you could always make that the user's choice too.)
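To make the mechanics concrete, here's a minimal sketch of the kind of per-user view filtering being discussed. All names and the data model are hypothetical illustrations, not any real platform's implementation: each user subscribes to zero or more volunteer moderators, and a post is hidden from that user's view if any subscribed moderator has flagged it.

```python
# Hypothetical sketch of subscription-based moderation: the same
# underlying post stream, filtered differently per user depending on
# which volunteer moderators they have opted into.

posts = [
    {"id": 1, "text": "mild take"},
    {"id": 2, "text": "borderline take"},
    {"id": 3, "text": "wild west take"},
]

# moderator -> set of post ids that moderator removes
mod_actions = {
    "strict_mod": {2, 3},
    "lax_mod": {3},
}

# user -> moderators they have subscribed to
subscriptions = {
    "alice": ["strict_mod"],
    "bob": ["lax_mod"],
    "carol": [],  # fully unmodded view
}

def view_for(user):
    """Return the post ids visible to this user under their chosen mods."""
    hidden = set()
    for mod in subscriptions[user]:
        hidden |= mod_actions[mod]
    return [p["id"] for p in posts if p["id"] not in hidden]

# alice sees only post 1, bob sees 1 and 2, carol sees everything --
# yet all three users' replies land back in the same shared thread,
# which is exactly the cross-contamination problem described above.
```

Note that nothing in this scheme partitions the *reply* stream: replies written from carol's unmodded perspective still appear in alice's filtered view, which is the "chain link" effect at issue.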

To be clear, I don't think the above argument is by any means fatal to the essential core of subscription-based moderation proposals, which I still think are superior to the status quo. Nor do I think it proves that subscription-based moderation isn't still essentially libertarian, or that it's an unjustifiable non-consensual imposition upon others (most of the effects on those who didn't opt in, as described above, are essentially indirect, and I think people could easily learn to adapt to them), or that most people against it aren't still probably motivated primarily by censoriousness. One important point in its favor, among many, is its marvelous potential to eliminate the network effect's tyrannical suppression of freedom of association and the right to exit; then again, I'm also heavily tilted towards thinking that most jannies are corrupt and biased and most moderation is unnecessary. If I had to argue against subscription-based moderation, though, an appeal to the above line of reasoning is what I'd use. (And while it's a decent argument for subreddits, Discords, small forums like this, etc., it's far less applicable to larger open platforms like Twitter or Facebook, which shouldn't necessarily be expected to have one unified culture. So I'd say bring on the subscription-based jannyism there.)

I agree that Rittenhouse and the "smirkgate" kid were defamed and deserve compensation, but even so, the journalists who defamed them were much closer, in relative terms, to having a reasonable and good-faith opinion than Alex Jones was with his deranged "crisis actors" nonsense. It's apples and oranges.

This is a matter of opinion and for the record I disagree. And that's why we're supposed to have a neutral system.

Joe Publick isn't gaining any leverage or power via letting his GPU whirr some econ data for FreedomAI, FreedomAI is.

You're confused about what FreedomAI is. FreedomAI is open-source and runs locally entirely on Joe's device (as increasingly more powerful LLMs do now). The freedom-fighting directives it follows (which Joe could modify himself if he cared to) are plain as day to see in its prompt, and its conformance to them is highly auditable. FreedomAI is merely a program, not a service.

TyrantAI's (or the free market equivalent) better, cheaper porn generator.

They're not providing porn. It's unethical and non-progressive, don't you know?

either way a return to harems as commonplace, while not ideal, is probably inevitable.

Why is it not ideal if they're synthetic partner harems? The problem harems caused was mate scarcity. If you have enough supply to genuinely meet the demand of every man for a harem then what's the issue?