Primaprimaprima

Bigfoot is an interdimensional being

2 followers   follows 0 users   joined 2022 September 05 01:29:15 UTC

No bio...

User ID: 342

Only then can you really destroy them by pointing out how ridiculous they are.

But what counts as "ridiculous" in the first place is itself politically determined. I personally think the left acts out in a lot of ways that I would classify as ridiculous, but it plainly hasn't discredited them on a national scale so far. The extent of your pre-established narrative dominance determines whether your particular mode of acting out will be seen as legitimate (BLM riots) or illegitimate (January 6th). So before you goad your opponent into acting out, you have to be in charge of defining what counts as "acting out". I think that's a more fundamental goal.

I'm uncertain if it's even possible for the left to push the pro-Palestine stuff "too far" at this point. I don't know how many centrist swing voters are left in America. Probably still enough to influence the results of national elections, but not enough to uproot the entrenched zeitgeist or really impact the way things are heading overall.

There are new top-level posts made in the thread throughout the week. A new thread goes up on Monday, so if you post it now we’ll have all of Friday/Saturday/Sunday to discuss it.

If you make the same points but with more tact, you will receive fewer downvotes.

I think it's clear that people frequently (though certainly not exclusively) use votes as a way of expressing agreement/disagreement, despite constant exhortations not to.

People are more forgiving of combativeness when they agree with the actual content of the post. There were posts where Dase really laid into Hlynka, and those posts were still upvoted, because Dase's views are popular here and Hlynka's views are not.

On the flip side, I think this post was written tactfully, but it still ended up in the negatives - in fact I was surprised to see how many downvotes it had given how anodyne it was. Certainly posts that were much less tactful have achieved far higher scores.

I don't think there's anything wrong with acknowledging that the vote button functions as a general boo/yay reaction in many instances. Lord knows that's how I use it a lot of the time. I think it's unavoidable. If you give people a big "I don't like this" button, and they come across a view that they find deeply objectionable, then they're liable to press the button, regardless of how well argued the post is.

I don't think that post was particularly tactful. Starting right off the bat by claiming the person is being weird isn't very tactful, just the opposite.

We might subjectively disagree over how tactful or not it is to call someone's post "weird" in this context. But the point is, I don't think 16 people downvoted that post because it called the parent post "weird". I think 16 people downvoted that post because it questioned how committed Republicans were to the principles of the anti-lockdown cause.

Posts with sharper personal insults than "weird" still manage to accumulate upvotes, if the content itself is popular enough. I already linked one. It's not that hard to find others (from multiple different users).

people who complain about being downvoted for not fitting into the "echochamber" of this place

But these people are simply correct in many cases. In every community with reddit-style voting, posts that disagree with the consensus viewpoint are more likely to be downvoted. This is simply obvious to me based on 15+ years of watching how different internet communities behave, and my knowledge of how I personally use the voting buttons, particularly with posts that provoke a strong emotional reaction from me. I can't recall any significant counterexamples, and TheMotte is no exception.

I want to reiterate that using the vote button as an agree/disagree button isn't a bad thing. It's natural and unavoidable. The solution is to simply not have any punishment associated with a low comment score. It's already a good first step that TheMotte doesn't hide low scoring comments like reddit and HN do, and I think we should remove the rate limiting as well.

“heads I win tails you lose” electionering will result in the person doing it’s total annihilation.

How does this describe Trump though?

He lost the 2020 election. He voluntarily stepped down from power and his challenger assumed power. There was no nefarious electioneering. He just lost and we moved on. I don't understand what the problem is.

Well he was literally banned for that comment.

It does seem like your comments require mod approval though, presumably due to having received too many downvotes, which I agree is very bizarre; that feature should be disabled.

Much of the core messaging on the right is explicitly 'anti-agency,'

As opposed to... the core pro-agency messaging of the left?

Obviously no one believes that individuals can act completely unconstrained by external factors. Whatever "pro-agency" ultimately means, it doesn't mean that.

I can see how you might construe the right's general fatalism regarding inequality as being anti-agency, but as has been painstakingly reiterated numerous times on this forum, HBD itself is policy-neutral. Recognizing the reality of genetic limitations is no different from recognizing gravity: it's a fact, and it doesn't care about your feelings, so you may as well get used to it. You're welcome to try to circumvent it anyway, and you might end up inventing the airplane in the process. But don't be shocked if you fail.

When is the last time a politician or right-wing influencer told someone from West Virginia that they have the power to improve their life by relocating, retraining or abstaining from drugs?

I can't think of any explicit instances to cite right now (except maybe some old JBP "clean your room" lectures), but I'm certainly happy to say it for them: poor people from West Virginia have the power to improve their lives by relocating, retraining, and abstaining from drugs. If they don't/can't do those things then that's on them.

I have to admit I'm in the company of a lot of witches who really just want a place they can spam the n-word, and the communities created by that second group are likely going to suck.

We already know what such a community would look like: it's called 4chan. It's one of the most influential internet communities ever and has been an endless source of entertainment and fascination for me for the past 15 years.

This kind of comment would be perfect for the Transnational Thursday thread.

with the Right-wing critique of Zionism growing in influence among younger audiences.

Are we sure about that? Certainly the left-wing critique of Zionism is growing in influence, but I'm not sure about the right-wing critique (to say nothing of explicitly DR ideas). I get the impression that when young people are critical of Israel, it's overwhelmingly for progressive reasons: Jews are white colonialists who are oppressing the non-white Palestinians, and opposing Israel is part of the broader struggle for racial justice. Right-wing inflected critiques of Israel seem to me to be as fringe as ever in the mainstream consciousness. But I do agree with your general point that even "normie" right-wing media has become edgier recently; Fox News is a lot more willing to say "white" than they were a few years ago.

I literally had a dream last night that Hlynka was unbanned.

In general I’ve never found dreams to be particularly interesting, my own or anyone else’s, and I’ve always puzzled over why they held such fascination for certain thinkers like Freud. Usually their contents are either nonsensical, or they’re connected to waking events/thoughts in a relatively straightforward way.

Have you had any interesting dreams lately? What do you think about dreams in general?

Beijing Pushes for AI Regulation - A campaign to control generative AI raises questions about the future of the industry in China.

China’s internet regulator has announced a campaign to monitor and control generative artificial intelligence. The move comes amid a bout of online spring cleaning targeting content that the government dislikes, as well as Beijing forums with foreign experts on AI regulation. Chinese Premier Li Qiang has also carried out official inspection tours of AI firms and other technology businesses, while promising a looser regulatory regime that seems unlikely. [...]

One of the concerns is that generative AI could produce opinions that are unacceptable to the Chinese Communist Party (CCP), such as the Chinese chatbot that was pulled offline after it expressed its opposition to Russia’s war in Ukraine. However, Chinese internet regulation goes beyond the straightforwardly political. There are fears about scams and crime. There is also paternalistic control tied up in the CCP’s vision of society that doesn’t directly target political dissidence—for example, crackdowns on displaying so-called vulgar wealth. Chinese censors are always fighting to de-sexualize streaming content and launching campaigns against overenthusiastic sports fans or celebrity gossip. [...]

The new regulations are particularly concerned about scamming, a problem that has attracted much attention in China in the last two years, thanks to a rash of deepfake cases within China and the kidnapping of Chinese citizens to work in online scam centers in Southeast Asia. Like other buzzwordy tech trends, AI is full of grifting and spam, but scammers and fakes are already part of business in China.

/r/singularity has already suggested that any purported AI regulations coming from China are just a ruse to lull the US into a false sense of security, and that in reality China will continue pushing full steam ahead on AI research regardless of what they might say.

Anyway, the main reason I'm posting this is to discuss the merits of the zero-regulation position on AI. I've yet to hear a convincing argument for why it's a good idea, and it puzzles me that so many people who allegedly assign a high likelihood to AI x-risk are also in favor of zero regulation. I know I've asked this question at least once before, in a sub-thread about a year ago, but I can't recall what sorts of responses I got. I'd like to make this a top-level post to bring in a wider variety of perspectives.

The basic argument is just: let's grant that there's a non-trivial probability of AI causing (or being able to cause) a catastrophic disaster in the near- to medium-term. Then, like many other dangerous things like guns, nukes, certain industrial chemicals, and so forth, it should be legally regulated.

The response is that we can't afford to slow progress, because China and Russia won't slow down and if they get AGI first then they'll conquer us. Ok, maybe. But we can still make significant progress on AI capabilities research even if its use and deployment is heavily regulated. It would just become the exclusive purview of the government, instead of private entities. This is how we handle nukes now. We recognize the importance of having a nuclear arsenal for deterrence, but we don't want people to just develop nukes whenever they want - we try to limit it to a small number of recognized state actors (at least in principle).

The next move is to say, well if the government has AGI and we don't then they'll just oppress us forever, so we need our own AGI in order to be able to fight back. This is one of the arguments in favor of expansive gun rights: the citizenry needs to be able to defend themselves from a tyrannical government. I think this is a pretty bad argument in the gun rights context, and I think it's about as bad in the AI context. If the government is truly dedicated to putting down a rebellion, then a well regulated militia isn't going to stop them. You might have guns, but the military has more guns, and their guns are bigger. Even if you have AGI, you have to remember that the government also has AGI, in addition to vastly more compute, and control of the majority of existing infrastructure and supply lines. Even an ASI probably can't violate the conservation of matter - it needs atoms to get things done, and you're competing with hostile ASIs for those same atoms. A cadre of freedom fighters standing up to the evil empire with open source models just strikes me as naive.

I think the next move at this point might be something like, well we're on track to develop ASI and its capabilities will be so godlike and will transform reality in such a fundamental way that none of this reasoning about physical logistics really applies, we'll probably transcend the whole notion of "government" at that point anyway. But then why would it really matter how much we regulate right now? Why does it matter which machine the AI god gets instantiated on first? Please walk me through the specifics of the scenario you're envisioning and what your concerns are. At that point it seems like we either have to hope that the AI god is benevolent, in which case we'll be fine either way, or it won't be, in which case we're all screwed. But it's hard to imagine such an entity being "owned" by any one human or group of humans.

TL;DR I don't understand what we have to lose by locking up future AI developments in military facilities, except for the personal profits of some wealthy VCs.

Slow news day? Guess I'll ramble for a bit.

Scientists shamelessly copy and paste ChatGPT output into a peer-reviewed journal article, like seriously they're not even subtle about it:

Introduction

Certainly, here is a possible introduction for your topic:Lithium-metal batteries are promising candidates for high-energy-density rechargeable batteries due to their low electrode potentials and high theoretical capacities [1], [2]. However, during the cycle, dendrites forming on the lithium metal anode can cause a short circuit, which can affect the safety and life of the battery [3], [4], [5], [6], [7], [8], [9].

This is far from an isolated incident - a simple search of Google Scholar for the string "certainly, here is" returns many results. And that certainly isn't going to catch all the papers that have been LLM'd.
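Incidentally, screening for the most blatant cases is trivially scriptable. Here's a minimal sketch in Python (the phrase list and function name are my own illustrative choices, and it obviously only catches verbatim boilerplate, not paraphrased LLM output):

```python
# Minimal sketch: flag text containing telltale LLM boilerplate phrases.
# The phrase list is an illustrative assumption, not an exhaustive set of tells.

LLM_TELLS = [
    "certainly, here is",
    "as an ai language model",
    "regenerate response",
]

def find_llm_boilerplate(text: str) -> list[str]:
    """Return every boilerplate phrase found in `text` (case-insensitive)."""
    lowered = text.lower()
    return [phrase for phrase in LLM_TELLS if phrase in lowered]

# Example: the opening of the battery paper quoted above.
intro = ("Certainly, here is a possible introduction for your topic: "
         "Lithium-metal batteries are promising candidates...")
print(find_llm_boilerplate(intro))  # -> ['certainly, here is']
```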

This raises the obvious question as to why I would bother reading your paper in the first place if non-trivial sections of it were written by an LLM. How can I trust that the rest of it wasn't written by an LLM? Why don't I cut out the middle man and just ask ChatGPT directly what it thinks about lithium-metal batteries and three-dimensional porous mesh structures?

All this fresh on the heels of YouTube announcing that creators must now flag AI-generated content in cases where omitting the label could be viewed as deceptive, because "it would be a shame if we (YouTube) weren't in compliance with the new EU AI regulations", to which the collective response on Hacker News was "lmao okay, fair point. It would be a shame if we just lied about it."

It would be very boring to talk about how this represents a terminal decline in standards and the fall of the West, and how back in my day things were better and people actually took pride in their work, and how this is probably all part of the same vast conspiracy that's causing DEI and worse service at restaurants and $30 burgers at Five Guys. Well of course people are going to be lazy and incompetent if you give them the opportunity. I'm lazy and incompetent too. I know what it feels like from the inside.

A more interesting theoretical question would be: are people always lazy and incompetent at the same rate, across all times and places? Or is it possible to organize society and culture in such a way that people are less likely to reach for the lazy option of copy and pasting ChatGPT output into their peer-reviewed journal articles; either because structural incentives are no longer aligned that way, or because it offends their own internal sense of moral decency.

You're always going to have a large swath of people below the true Platonic ideal of a 100 IQ individual, barring large-scale genetic engineering. That's just how it goes. Laziness I'm not so sure about - it seems like it might be easier to find historical examples of it varying drastically across cultures. Like, the whole idea of the American Revolution has always blown my mind. Was it really all about taxes? That sounds like the very-slightly-true sort of myth they teach you in elementary school that turns out to be not-actually-true-at-all. Do we have any historians who can comment? Because if it was all about taxes, then isn't that really wild? Imagine having such a stick up your ass about tax hikes that you start a whole damn revolution over it. Those were not "lazy" men, that's for sure. That seems like the sort of thing that could only be explained if the population had vastly different genetics compared to contemporary America, or a vastly different culture; unless there are "material conditions" that I'm simply not appreciating here.

Speaking of material conditions, Brian Leiter recently posted this:

"Sociological idealism" was Charles Mills's term for one kind of critique of ideology in Marx, namely, a critique of views that, incorrectly, treat ideas as the primary cause of events in the socio-economic world. Marx's target was the Left Young Hegelians (whose heirs in the legal academy were the flash-in-the-pan Critical Legal Studies), but the critique extends much more widely: every day, the newspapers and social media are full of pontificating that assumes that ideas are driving events. Marx was always interested in the question why ideologists make the mistakes they do.

Marx's view, as far as I can tell, was that ideas (including cultural values and moral guidelines) should be viewed as causally inert epiphenomena of the physical material and economic processes that were actually the driving forces behind social change. I don't know where he actually argues for this in his vast corpus, and I've never heard a Marxist articulate a convincing argument for it - it seems like they might just be assuming it (but if anyone does have a citation for it in Marx I would appreciate it!).

If Marx is right then the project of trying to reshape culture so as to make people less likely to copy and paste ChatGPT output into their peer-reviewed journal articles (I keep repeating the whole phrase to really drive it home) would flounder, because we would be improperly ascribing the cause of the behavior to abstract ideas when we should be ascribing it to material conditions. Which then raises the question of what material conditions make people accepting of AI output in the first place, and how those conditions might be different.

I know everyone loves to hate this show

That's largely the intended reaction.

Always remember: if you're watching it, you are the target audience. Even if you're only watching it "ironically".

I've strongly disagreed with Dase, or well, did, before he blocked me in a hissy fit

Hey he blocked me too (for a time). If we ever add achievements to the site, one of them should be "Get blocked by Dase".

But at that point, you are more concerned with the alignment of the operators, whose wishes are faithfully reproduced. Are said operators well-disposed towards you?

I agree that's worth asking. But in a true zero regulation scenario, where everyone has access to a personal AGI/ASI, you have a lot more operators to worry about - now you have to worry about how well disposed the entire rest of humanity is towards you. If you give everyone the nuke button, someone is going to push it for shits and giggles.

At least OAI and Anthropic are on record stating that they want to distribute the bounties of AGI to all. While I'm merely helpless in that regard were I to choose to doubt them, I still think that's more likely to turn out well for me than it is if it's the PLA who holds the keys to the universe. Even the USGov is not ideal in that regard, though nobody asked me for my opinion.

I probably trust the US government more than Sam Altman. But regardless, Zvi mentions in this post that there are engineers and execs at multiple leading AI labs who wish they didn't have to race ahead so fast, but they feel like they're locked in a competition with all the other labs that they can't escape. I think that nationalizing the research and eliminating the profit motive could help relieve this pressure.

"Right Wing" does not mean "religious." There's a correlation between the two, obviously, but imo that's more the result of history than philosophical alignment.

I believe that a certain type of magical thinking is, if not a necessary component of the rightist personality, then at least a prominent and salient feature of it across multiple diverse manifestations. (I recently raised the question here of whether there was actually something to leftist accusations of "right-wing conspiracy theories" - whether the rightist mind might actually be more prone to conspiratorial thinking.)

Nietzsche is the archetypal example to study here. In terms of his explicitly avowed philosophical commitments, he was the arch-materialist, denying not only God but also any notion of value (aesthetic or moral), free will, and a unified conscious "self" that could be responsible for its actions; at times he seemed to suggest that even the concept of "truth" had too much supernatural baggage and should be rejected on those grounds. And yet throughout his work he couldn't stop himself from making constant reference to the inner states of man's "soul", relying on analogies and parables that featured Greek gods and demons, judging people by a standard of authenticity which on any plain reading he should have been forced to reject, and courting overt mysticism with his concept of the "eternal recurrence". This was a fundamental psychological tendency expressing itself, a yearning for a reality which he could not explicitly avow. Not only could he not excise these concepts from his thinking, but they were essential to him; they were the fiat currency of his psychic economy.

Or look at Heidegger who, despite having a complicated relationship with Christianity, attempting to distance himself from it, and heavily critiquing Cartesian dualism in his early work, ended up throwing himself headlong into mysticism in his later works (for example his lectures on Hölderlin).

This passage from Heidegger's Country Path Conversations is illuminating:

GUIDE: Perhaps even space and everything spatial for their part first find a reception and a shelter in the nearing nearness and in the furthering farness, which are themselves not two, but rather a one, for which we lack the name.

SCHOLAR: To think this remains something awfully demanding.

GUIDE: A demand which, however, would come to us from the essence of nearness and farness, and which in no way would be rooted in my surmise.

SCIENTIST: Nearness and farness are then something enigmatic.

GUIDE: How beautiful it is for you to say this.

SCIENTIST: I find the enigmatic oppressive, not beautiful.

SCHOLAR: The beautiful has rather something freeing to it.

SCIENTIST: I experience the same thing when I come across a problem in my science. This inspires the scientist even when it at first appears to be unsolvable, because, for the scientist faced with a problem, there are always certain possibilities for preparing and carrying out pertinent investigations. There is always some direction in which research can knuckle down and go toward an object, and thus awaken the feeling of domination that fuels scientific work.

SCHOLAR: By contrast, before the enigma of nearness and farness we stand helplessly perplexed.

SCIENTIST: Most of all we stand idle.

GUIDE: And we do not ever attend to the fact that presumably this perplexity is demanded of us by the enigma itself.

If there is such a thing as an identifiable core of the "rightist mind", I believe it consists in finding the enigmatic beautiful rather than oppressive.

(I cite these examples because, rather than being the psychological eccentricities of a few individuals, I observe the same patterns in contemporary rightists, albeit in an attenuated form.)

I would be happy to trade complete restrictions on public AI research for complete control of society until the AI God arrives. Would that be a trade you'd be interested in?

I'm not sure if I understand the question, or how it's related to the section you quoted.

On a basic level I'd be willing to hand control of society over to virtually any individual or group if it meant being able to live in a reality where machine learning was impossible. You can be the king, the progressives can be the kings, it doesn't matter.

The only thing that might give me pause would be the concern that such a decision would betray a lack of courage on my part.

The reason 'tech' has gotten so far without being regulated is simply because Gov't doesn't understand it

I hear this a lot, but is it actually true?

Relatively few people in government have actual professional-level expertise when it comes to finance, manufacturing, workplace safety, international trade, or nuclear energy, but the government seems to regulate those things just fine. (Arguably what we call "tech" is easier to understand than those things, at least the parts of it that are salient for regulation.)

If only one country has a nuclear arsenal, they could conquer the world quite easily. If many countries have nukes, there is no such danger.

Right, there's value in deterrence. But presumably you don't think that every individual on earth should have personal direct access to the nuke button - instead we try to limit that power to a small number of trusted actors. It seems to me that everyone having unrestricted personal access to ASI is the same as giving everyone a direct line to the button.

You cannot arbitrarily improve skill with effort, and even guidance and feedback.

I never said otherwise.

The average person off the street can't become Terry Tao and win a Fields Medal, no matter how much time and effort they put into math. Their brains are physically incapable of getting to that level. We're in agreement on that point.

All I was saying is that drawing ability isn't intrinsically coupled to a superior faculty of visualization, and even the best artists still have to study and practice to reach a professional level of skill.

Regarding how many people are physically capable of becoming competent artists: I don't know the exact number, but I do believe that it's higher than is generally supposed. It's higher than the number of people who are capable of winning a Fields Medal anyway.

We have to distinguish between drawing purely from reference (e.g. portraiture, still life) and creating original pieces. I think the vast majority of otherwise developmentally normal people are capable of learning the former to a competent level. I've done it, I've seen other people do it. It's a purely rote mechanical skill.

Original pieces are a lot harder but it's not hopeless. I started from absolute zero, I have no intrinsic talent for this. In fact I have anti-talent, progress has been abnormally slow and painful for me. But I was still able to make it this far. Obviously it's not pro level even by anime standards, but I think it's kinda cute and I got a few responses saying as much when I posted it online.

I also don't think I'm anywhere near the inherent ceiling on my abilities. I struggled for years partly because traditional art education is all crap and not at all geared towards people who naturally think analytically. Now that I'm learning how to take apart drawings the same way I take apart programs I'm starting to see a lot of progress and I'm way more confident in my ability to keep going.

I'm surprised at some of the reactions to the "oddness" of Hlynka's views.

It wasn't so much his object-level political views, which as you point out were largely garden-variety conservative talking points that would have been at home on 00s Fox News. What really made him unique was his personality and his discussion style.

He was supremely confident in his own views, and seemingly oblivious to any and all criticism, despite being (in my own personal opinion) supremely wrong about some of those views. He frequently railed against "postmodernism", despite the fact that a simple transcript of his comments would constitute a pretty good experimental postmodern novel in its own right. He insisted that all of his ideological opponents, whether they be Rationalists, woke progressives, fascists, or anything in between, were all really "the same" underneath, in spite of the continued insistence by all of those groups that they had deep fundamental disagreements with each other. He had a habit of simply fleeing from any sub-thread where he was asked to provide direct evidence of his claims; this clashed very noticeably with the "grizzled military veteran, ride the tiger, don't take no shit from no one" personality that he wanted to project. It was this contrast that made him such a frustrating and fascinating character.

I'd rather have a discussion partner who's interesting and wrong than a boring one I agree with. In spite of my numerous disagreements with him, I would often check his profile just to look at his recent comments and see what he was up to. So his ban will constitute a loss for me in that regard.

A shorter version is: people develop emotional attachments to unexpected things, like programming languages. There is no guarantee that these attachments are rooted in economic concerns. Understanding these emotional attachments is important for understanding their behavior.

Do you have any examples of instructional materials that worked well for you?

Not really! I feel like I had to figure out the most important things myself.

These days I'm basically using the same principles you'd find in the Loomis method ("Drawing the Head and Hands" and "Figure Drawing For All It's Worth") but what really helped things start to click for me was tracing the construction directly over references and trying to think, "how would I actually construct this? Do I have a repeatable process I can apply to future drawings like this?"

I also find it helpful to watch videos of pro artists at work and look at how they think about things and how they approach problems, like David Finch or this guy.

Game writing was dreck before these consultants and is so now, too. The reason for this is simple - almost all game writers are D&D geeks who almost exclusively read science fiction and fantasy garbage

I occasionally see this self-deprecating tendency among fans of sci-fi and other types of genre fiction, where they assume that there must clearly be some inherent property of classical literature, unbeknownst to the plebeians, that sets it apart - that the English majors are hoarding the secret sauce for what makes a work "actually good". I assure you that they're not.

The average work of canonical literature is, in my opinion, not that good, and most of these works have "stood the test of time" only due to accidents of history, rather than their own intrinsic merits. This isn't because of any particular failing on the part of the writers or critics involved, but is instead a simple corollary of the fact that the majority of works in any domain will tend towards mediocrity. The average sci-fi story ranges from "meh" to "ok", just like how the average work of "literary" fiction ranges from "meh" to "ok". It's debatable how many truly Great Books have ever even been written - think of how many physics books/articles throughout history have truly advanced the frontier of understanding in a deep and meaningful way, compared to the mountain of unread and irrelevant papers produced each year to feed the tenure committee machine. All domains of human activity function in essentially the same way, including art, including "high" art.

Of course I'm by no means advocating for total aesthetic anarchism. Some works are better than others; some works are really bad and some works are really great. And being conversant in artistic theory and the history of art will help artists produce better works instead of worse ones. I just want to be careful that we're not engaging in a knee-jerk elevation of the classical just because it's classical. In fact 20th/21st-century genre fiction has made clear advancements that were largely undreamt of in previous eras of literature, particularly in terms of the range of plot structures and character types that it treats.

way overtending this garden

I don't think this is an entirely baseless claim, but I do think the mods generally do a great job of modding for tone and not content, which is what we want from them. This is the only public forum I'm aware of where Nazis and progressives both make regular contributions to the discussion. We want to curate and preserve a space for that kind of ideological diversity, and if the cost of that is that the tone of the average post becomes somewhat more stilted, then that's an acceptable loss. A complete withdrawal of moderation would lead to people with minority viewpoints self-selecting out of the discussion even more than they already have.

Most of the discussion here just sounds like (and I suspect heavily is) chatbots talking back and forth to one another.

Many of the regulars were posting here before ChatGPT (or even before GPT-3).