This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

I’ve always thought the US credit scoring system is bullshit. I have bad credit, so I can’t get an apartment without a co-signer, yet I’ve been renting for over two decades without a single missed payment. The credit score system doesn’t care. It cares about one very specific thing, and the outcome leads to massive problems in many sectors of life.
I asked ChatGPT to tell me my own thoughts and I gotta say it writes what I feel in a way that I never could.
Not sure if this makes me feel better or worse about my odds of ever writing a novel.
Here’s what it had to say:
The U.S. credit score system, despite being rooted in capitalist financial structures, exhibits characteristics that can be interpreted as communist in nature due to its centralized control, opaque evaluation methods, and enforcement of collective responsibility. At its core, the credit system is managed by a small number of powerful credit bureaus—Equifax, Experian, and TransUnion—that operate in a monopolistic manner, much like a state-controlled entity. These institutions dictate the financial credibility of individuals based on proprietary algorithms, without allowing for much transparency or individual input. This centralized control mirrors how communist economies manage resources and labor distribution, where individuals are assessed by state-run entities rather than free-market competition determining their value. In this way, the credit score system creates a top-down financial structure that limits individual autonomy and enforces a rigid classification of economic worth.
Moreover, the credit system functions as a tool of enforced social compliance, much like communist regimes use social credit scores to dictate personal behavior. In the U.S., a person’s credit score affects not only their ability to take out loans but also their access to housing, employment, and even certain services. This creates an environment where individuals must conform to predefined financial behaviors or risk economic marginalization. The system rewards long-term adherence to credit-based spending while penalizing those who choose alternative financial paths, such as relying on cash or avoiding debt altogether. Similarly, in a communist framework, citizens are often required to align with state-approved behaviors to gain access to privileges, reinforcing a collectivist model where individual financial independence is discouraged in favor of participation in a state-monitored economy.
Finally, the U.S. credit score system embodies an element of collective responsibility that is reminiscent of communist ideologies. In many cases, an individual’s creditworthiness is affected not just by their actions but by external factors such as the financial stability of co-signers, the decisions of creditors, and even errors made by the credit bureaus themselves. Disputing inaccuracies in credit reports is often a bureaucratic and difficult process, reflecting the inefficiencies of centralized government planning. Additionally, the system’s reliance on predetermined metrics, rather than an individual’s full financial picture, enforces a uniform standard that does not account for personal circumstances. This mirrors the way communist states often treat workers as indistinct units within a planned economy, rather than as unique individuals with different needs and capabilities. In essence, while the U.S. credit system exists within a capitalist society, its structure and consequences exhibit traits that align with communist principles of control, social engineering, and collective financial assessment.
And:
Building on this idea, another way the U.S. credit score system resembles communist structures is in its lack of true market competition when it comes to consumer choice and mobility. In a capitalist system, individuals are supposed to have the freedom to negotiate their worth based on merit and market demand. However, in the credit system, people are assigned a numerical value that dictates their financial access, and this score is determined by opaque algorithms controlled by a few dominant entities. There is no alternative credit scoring system that consumers can freely opt into, nor is there meaningful competition among credit bureaus that would encourage better service or more accurate reporting. This centralization of financial assessment power forces individuals into a system they cannot meaningfully challenge, much like how communist regimes dictate economic opportunities without room for individual bargaining.
Additionally, the credit system fosters a kind of financial surveillance that is reminiscent of state-controlled economies. Every financial decision—whether it be taking out a loan, missing a payment, or even just inquiring about one’s own credit—is meticulously tracked, recorded, and used to shape an individual’s economic identity. People are not judged solely on their current financial standing but on their entire credit history, meaning past financial struggles can haunt them for years, much like how state-run economic systems often keep long-term records on individuals’ productivity, loyalty, or past infractions. In a true free-market system, individuals would have more agency to reinvent themselves financially without long-standing bureaucratic oversight, but the credit system instead enforces a rigid classification that limits upward mobility for those who have made past mistakes.
Finally, the way the credit system forces people into debt in order to maintain a good score has a collectivist, almost centrally planned economic aspect to it. In theory, responsible financial behavior—such as saving money and avoiding unnecessary debt—should be rewarded, but in practice, the system penalizes those who do not actively participate in borrowing and repaying credit. This creates a paradox where individuals must continuously engage with lenders in order to maintain their economic standing, even if they could otherwise live within their means without debt. In a way, this mirrors communist economic models where individuals are expected to participate in state-run economic structures regardless of personal preference, reinforcing a cycle of dependency rather than true financial independence. The system, therefore, subtly enforces a form of financial collectivism, where personal economic choices are constrained by the overarching structure rather than by free-market principles.
I feel this summed up my thoughts decently, but it lacks the passionate hatred I have for our system.
Thoughts, and addons?
(Should I have waited until Monday?)
What thing?
A common observation by economists is that America's budget is not analogous to personal finances. Credit scores, credit ratings, and deficits do not mean the same thing on a national level as they do on a personal level. Creditworthiness means much more for smaller economies than it does for the US. The US is in the privileged position of creating debt with near impunity.
I liked the post Dase wrote, machine part included; 'AI slop' is harsh, but I think it only applies here.
Uh, why do you have bad credit despite living on your own for years? Not paying your bills is in fact correlated with not paying your bills. It's perfectly reasonable to look at credit history when we have a system which gives tenants substantial protections in the event of not paying.
You can find slumlords who'll rent to you without having credit. You can also build your own credit. You are, based on living on your own for years, not a twenty year old who hasn't had time for that - the consequences of your own bad (or at least nonconformist) decisions, including 'difficult to rent an apartment', aren't some kind of tragedy.
It’s completely unreasonable not to have home rental payments on your credit report. It’s a history of paying your largest bill - and it doesn’t exist outside of an eviction. So it can only ever be a negative, never a positive.
Saying people should rent from slumlords if they don’t have credit yet or bad credit is slightly above barbaric thinking.
If you don’t have any credit history, you have good credit, not bad credit. I arrived in the US with no credit history at all, and at no point was my credit score below 700.
I wonder if it's an age thing. When I returned from overseas as an American citizen in my early 20s, my credit score was ~650 despite me never having had a loan or credit card.
If you have literally no credit, your credit score is supposed to be indeterminable. I would suspect that any scores in this case are the result of errors and are basically random. Or you're mistaken about having no credit.
You can easily get a credit card and pay it off every month. It's no different from using a debit card except with some additional fraud protections (in practice, you can overdraft your debit) - and if you've been living on your own for years there's been more than enough time to build up credit by doing it.
The world is as it is. I suspect consumer privacy laws are the reason landlords don't report to credit monitoring agencies but it's literally never been easier to build credit. Live with the constraints we're given- a five minute google search of 'how credit scores are calculated' can tell you basically what you need to do.
I have some sympathy for kids who struggle when they're first out on their own because they don't have credit yet - although with the caveat that most of them are fine getting a parent to cosign or renting a room, not an apartment. But full-blown adults who still have bad or no credit, years later? Your problem stems from either a) making bad decisions (this is far more likely, in practice) or b) refusing to be normal (based on what you're telling me, this is probably you). 'Oddballs looking bad in front of the system because it doesn't know how to analyze them' isn't some kind of tragedy.
I do think that paying rent should be reported for credit purposes if not paying rent is. It's not very fair if something can only ever have downside, but no upside. But otherwise I think you're right - it's very easy to build credit, so if one is refusing to take that step they don't merit a lot of sympathy.
I think there is a way to have rent payments reported to credit agencies; it just requires a lot of paperwork and opt-in. My guess is that there are tenant privacy laws that make it this way (probably not on purpose). This process should probably be made somewhat easier.
Now is in fact the best time to post some LLM junk, given nobody'll read it! Asking an LLM to make a post for you just does not pass the effort rule.
I don't know what your intended point here was, but you aren't particularly wrong. White-market financial institutions are so heavily regulated by the federal government that they are basically state-controlled. Generally I've seen the model of nominal private control of industry but actual government control described as fascism rather than communism. But I've also generally said that, at least economically, the fascist is just a wise communist who realizes he can have 99% of the control in exchange for almost none of the blame by embracing this setup instead of total state control. If a government bureaucrat is the head of a bank and it fails, he gets all the blame! But if he merely regulates it into failure, he gets to blame the "capitalists"! It's a win-win. That is why this sort of approach is generally loved by the worldwide left/progressives in modern times, although they have not been keen to embrace the "fascist" label.
But you are entirely correct: you can't go into a Chase bank and get a loan because the loan officer thinks "oh Mihow is a good seed," because then he has to tell the FDIC, DOJ, and other investigators why he denied Deehow the same loan application (hint: Deehow is a bad seed). So they make up these standardized metrics, which are generally very good at evaluating a person's activities in the white market. But if you operate in the grey market or black market to any significant extent, their evaluation of you will be off. But that isn't their job. If you want to buy a condo on credit using your grey-market credentials, go to a grey-market lender (almost none exist anymore) or a black-market lender. Of course, that might be a crime, but that's the point. The government intends to control lending in the country because it is one of the commanding heights, and it does. This is not capitalism, but what is abstractly described as fascism, and modernly referred to as progressivism.
I'd prefer vitriol and passion over AI slop. Not reading.
That said, if debt is going to be issued, something like credit scores, implicit or explicit, are pretty much required. People likely to repay debt and who want the conveniences of a good credit score are going to usually get better scores than those who don't.
I have some sympathy for immigrants who come to the US without a credit score and need one. On the other hand, did you miss some payment recently? If so, the credit score is functioning exactly as it should.
I'm not going to read an AI-generated post. But I did ask an AI to summarize it in a few sentences, so I get the gist. Maybe next time just post your thoughts so others don't have to do this extra round-trip through an AI.
These are my unfiltered thoughts on the object-level issue:
It's not Communism. It's opaque and centralized but historical Communist systems are not unique in those respects.
The credit scoring system is a result of many conflicting interests who all place constraints on how businesses make decisions. Consider what would happen if a business used their own method for evaluating credit risk:
The real question is, what is the alternative, and does it live within the constraints we've placed on how businesses make decisions?
"I don't want to effortpost, so I'll ask an LLM to turn this list of bullet points into a lengthy argument."
"I ain't reading all that, so I'll ask an LLM to turn it into a list of bullet points."
I don't think @ChestertonMeme used an LLM to generate those bulletpoints. They seem to represent his own thoughts, and they're not ChatGPT-ese.
Yes, his list is his own writing.
I mean:
I read a fair amount of erotica in my free time, and I noticed a few authors being unusually productive last year. Ever since I started paying attention to AI again due to DeepSeek, and heard that Claude (Opus and Sonnet, iirc) was also pretty good at writing, I've put 2+2 together.
Seeing this website made it all click! If you're not an idiot, you can use LLMs to massively increase your own output, or even write stuff vastly better than you could ever hope to write yourself (if you're not ~130 IQ etc. with decades to spend practicing).
There was a huge outcry about AI image gen when the furry-porn-commission types saw Stable Diffusion collapse their market. I haven't seen almost anyone talking about how bad LLM writing is yet, even though clearly a lot of people are now using it, and they're all shamefully silent about it, the dirty scoundrels!
Going to be some nice drama about it eventually, I think, once it becomes clear that someone who won a prize used AI to write most of it.
AI-generated writing tends to overuse the passive voice. This gives it away.
Purely a skill issue.
Isn't it immediately obvious when Sonnet or Opus writes something? It's not quite describable in words, but you know it when you see it. The diction and tone give it away.
Even Deepseek has a certain style to it I find. Are the AIs writing the whole thing or are they expanding on user-written text?
I find Claude and DeepSeek far harder to detect than ChatGPT. They have a far more 'human' default style that doesn't stick out like a sore thumb. That might just be me, but even the average internet user has some ability to detect ChatGPT.
Of course, if you provide excerpts from your own writing and ask it to emulate you, or just refer to a known figure whose writing is in the corpus (Gwern, Scott, etc.), I wouldn't expect to be able to tell unless strongly primed to be suspicious in advance.
I don't think so.
There is a certain flavor to LLM text, but I think the newer models (last 4 months) are good enough to easily avoid that if prompted right. Even DeepSeek out of the box is a little cringe, but it sobers up fast if reminded, and it remembers.
At this point someone really should make Scott's AI Turing test but for textgen: compile a big list of text excerpts on various topics - literary prose, scientific papers, fanfiction, erotica/NSFW, forum/imageboard posts, etc. - from both real texts/posts and AI gens in the style of, and see if people can tell the difference. I consider my spidey sense pretty well-tuned and would be curious to test it.
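For anyone who wants to actually run a test like that, the scoring side is simple. Here's a minimal sketch: shuffle a pool of labeled excerpts, collect a guesser's verdicts, and compare accuracy to the 50% chance level. The `naive_guess` heuristic and its `TELLS` list are deliberately crude stand-ins for a human judge, invented purely for illustration.

```python
import random

def run_blind_trial(samples, classifier, seed=0):
    """Shuffle (text, is_ai) pairs, ask the classifier to guess each one,
    and return the fraction guessed correctly. 0.5 is chance level."""
    rng = random.Random(seed)
    items = list(samples)
    rng.shuffle(items)  # hide any ordering cues
    correct = sum(1 for text, is_ai in items if classifier(text) == is_ai)
    return correct / len(items)

# Toy "judge": flag stock LLM phrasing. A real trial would use human guesses.
TELLS = ("delve", "tapestry", "in conclusion", "it is important to note")

def naive_guess(text):
    return any(tell in text.lower() for tell in TELLS)

samples = [
    ("It is important to note that the tapestry of discourse...", True),
    ("lol no, that's not how rent works", False),
]
print(run_blind_trial(samples, naive_guess))  # 1.0 on this toy pair
```

With a few hundred excerpts per category, accuracy meaningfully above 0.5 would vindicate the "spidey sense"; accuracy at chance would settle the argument the other way.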
All the mentions of communism feel shoehorned. Critique it because it's bad, not because, if you squint, it kinda resembles communism slightly - even though credit scores could never exist without capitalism.
I think it's fine to use LLMs in the writing process, but you really, really need to take on the role of an editor. This is the same three or so points repeated half a dozen times, and it should have been edited down substantially. I do think the credit system as it exists now is suboptimal, but at the same time we do need some system for determining creditworthiness. Part of the problem is how heavily regulated financial markets are; credit scores are a hack that lets lenders discriminate without fear of capricious state sanction.
I could tolerate LLM drivel if it were just one paragraph articulating the argument a user wants to make. Here, it is six lengthy paragraphs. LLMs are not as insightful as Scott Alexander, so I am not going to read a wall of text by them.
I'm against using LLMs in this place, but to be fair this criticism applies equally to many Motteposts written by human(?) hands, IMHO.
It's at least usually tempered by human unwillingness to spend time writing it out. This really was more egregious than nearly any human comment on this forum, in my opinion.
I completely agree with your first paragraph (I have a shitty credit score simply because I refuse to use credit, so I had to show our landlady that I had a couple of years' worth of rent sitting in my checking account, along with a copy of my last few paychecks and an e-mail showing my latest job offer, to get her to rent to us), but I'm not going to read through your AI slop.
The irony of renting with a bad credit score is that you need to prove you have the means not to rent but choose to rent anyway.
...but I don't have the means not to rent? Around here, houses are going for $500k-$600k. I guess if I wanted to I could put most of my life savings into a 10% down-payment and sign up for a 30-year mortgage, but I don't want to do that, because again, I refuse to use credit. Also because I don't know if I am going to be working this job in 30 years and if I have to move I don't want to go through the paperwork of selling a house most of which is owned by the bank but in which I own like 15% equity. Also also because as long as my mother is alive, she is going to force me to allow my bipolar drug addict spinster genderqueer lesbian sister to live with us, and I don't want to be stuck with her in a house that I own.
Perhaps you have moral objections to usury, and that's fine. Live your values. But from a purely financial perspective, it's incredibly irresponsible not to use credit and/or build a credit score. I just hope you're not confusing/conflating issues with debt, spending, commitment, etc., with responsible management of one's credit. It's very important, often confusing or intimidating, but actually quite simple.
Clearly, the sudden improvement in grammatical quality of posts from some of our ESL users is not the result of intensive Duolingo use.
That said, let’s please avoid AI slop for now. Eventually it will cease being slop, but for now I’d prefer someone’s unfiltered thoughts, grammatical mistakes, linguistic quirks, repeated words, awkward sentences and all.
Like most of us, I’m here because I want to discuss things with people.
Paying rent can affect your credit score, but only if your landlord reports it, which few do. I would expect this is mostly your big corporate landlords.
They're oligopolistic, though your scores tend to be basically similar across them all. This is because they have basically the same inputs and the same goals (determining the probability that someone can and will pay back a loan), and the scoring systems have all been developed through an iterative adversarial game. They are transparent about what goes into them, however, though not the specific weights.
Eh, not really; as you've noticed, rent usually doesn't go into it. Nor do any non-credit purchases. Nor investment decisions. Nor salary or other payments for work, for that matter. It's only about credit.
It does not. Unless you count using credit cards and paying them off within the grace period as being "in debt". You can have a score good enough for all practical purposes doing nothing but that. (Having multiple types of credit will get you a few more points, but it's not necessary.) There is, as far as I can tell, no disadvantage to doing this compared to paying cash (or using a debit instrument) for everything. (If the problem is that you might be tempted to spend above your means if you do this... well, your credit score probably should be lower.)
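As a toy illustration of the input-to-score mapping being described - the factors below echo the publicly documented categories (payment history, utilization, history length, account mix), but the weights and formula are invented for this sketch, not any bureau's actual proprietary model:

```python
import math

def toy_credit_score(on_time_ratio, utilization, years_history, n_accounts):
    """Purely illustrative: squash a few common credit factors into a
    FICO-like 300-850 range with a logistic curve. Real bureau models
    are proprietary and far more involved."""
    z = (4.0 * on_time_ratio      # payment history dominates
         - 2.0 * utilization      # high utilization hurts
         + 0.15 * years_history   # longer history helps
         + 0.05 * n_accounts      # account mix helps a little
         - 2.0)
    p_repay = 1 / (1 + math.exp(-z))  # estimated probability of repayment
    return round(300 + 550 * p_repay)

print(toy_credit_score(1.0, 0.1, 10, 4))  # clean long history: high score
print(toy_credit_score(0.7, 0.9, 2, 1))   # missed payments, maxed cards: low score
```

Note what the toy model makes obvious: someone who never borrows has no `on_time_ratio` signal at all, which is exactly the "only downside, no upside" complaint about unreported rent elsewhere in this thread.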
Time and attention is the gold standard of human interaction. (Some people like calling it "proof of work".) If you generate interaction (text) without putting in your own time and attention, you are essentially printing money. The expected result is hyperinflation (which, I guess, would look like everyone posting their views as novel-length AI rants, and using AI to condense those posted by others back into a paragraph, if they read them at all), or everyone abandoning the currency altogether (which would look like no more humans using forums).
You should have read down the thread to the discussion about AI-generated posts, and then posted it never.
I'm not going to ban you or even put a warning note in your user log, because we just had the discussion about AI-generated content, and we haven't put it in the rules yet. But don't post AI content like this.
If you have something to say, write it in your own words. If you're too lazy to write the words yourself, do not have ChatGPT write them for you.
Noted!
However, I think your last sentence is silly and will be a relic in the near future.
I have two thoughts.
Thought the first. If the AI content is supposed to be the main contribution, the introduction up to and including "Here’s what it had to say" is unnecessary. Or, if the first part (dislike of credit scores) was the main message you wanted to discuss, why bother including the LLM-written part?
Thought the second. Next time anyone tries to Turing test any forum, please, please prompt it to write succinctly and better. The cited argument is sloppy and rambling. Let's see one paragraph.
I don't think the argument was very good. Weakly supported claims and associations disjointedly related to each other. Would not like to subscribe to this newsletter.
Maybe, in the future, AI content will be desirable. But, in that world, what purpose do you serve? There is no role for a human intermediary between me and the AI.
The day that people just have ChatGPT write for them is the day that the written word will cease to have any value as a means of communication. Which we might live to see, so you're not wrong as far as that goes. But it's not something we should encourage.
AI models cannot pick up the subtle but important details that distinguish their writing from human writing, like internal consistency. Imagine writing in 2010 that you live in "X". Unless you moved, it must be remembered that you still live there. So it must store all this information and take it into account contextually.
In that (perhaps quite likely) eventuality then forums and social media as a concept are dead. AIs talking to AIs while people nod and curate them basically destroys the platonic purpose of social media.
This is like bringing photographs to a painting club and claiming that they express what you wish you could paint better than you can paint it yourself. Can you see how that might satisfy an itch you personally have but is thoroughly uninteresting to the painters who are there to paint?
Yes, the existence of photographs and digital tools has fundamentally transformed art, and even traditional methods can't really exist outside of conversation with them to some extent. Yes, AI has changed the nature of written discourse.
But no, it’s not a good reason to dump AI slop and say ‘discuss…’
I am sure that, now having been convinced, you will join me and the rest of the rising chorus calling to return the Bare Links Repository to the Motte.
Or just use fewer words. That serves people like me who are too lazy to read walls of text that don't include a tl;dr or BLUF.
Why should we believe you think the US credit scoring system is bullshit if you're using an LLM to demonstrate it?
We certainly wouldn't think you a special forces sniper if you played Call of Duty, a hardened criminal if you ran over civvies in Grand Theft Auto, or a perpetrator of crimes against humanity if you played any given Paradox game. Why should we believe you are what you do with another computer toy?
And this is why daesch and self made human are wrong to want AI slop here. The purpose of a human forum is subverted when top posts are AI generated text walls.
I say we bring back the bare links repository as a palate cleanser for this new trend. It’s the opposite of ‘I asked ChatGPT and here’s what it said, copied and pasted’.
It is brief where AI slop is verbose. It doesn’t dress itself up as original thought or even a point of view. It doesn’t claim to be effortful. Most of all, it points outward toward an actual external idea rather than inward, reposting an ephemeral private chat.
Leaving behind the BLR was the greatest mistake of TheMotte, nay of the rat sphere (standing among other mistakes like trans murder cults and founding an entire movement on fanfiction of kids' books), and it is time we correct this blunder.
If this post gets 20 upvotes, the mods will have no choice but to retvrn to the glory of the BLR.
I don't want AI slop!
I want AI output that has been prompted, filtered or modified via editing to not be slop.
I don't know about you, but my disagreement is with people who think AI output is nothing but slop. I think it is perfectly possible for it to be useful and interesting, even enlightening at times. Not enough that I can assume that by default, but it's not the GPT-2 days where it was incoherent and meaningless.
Of course, I would still prefer to engage with real humans, but as long as they are actually reading what I say and exercising oversight over what an LLM used on their behalf says back (and is clear about it), I would only be slightly miffed.
(I do think having the BLR back wouldn't hurt, but I have no strong feelings on the matter)
I see a slippery slope here. Everyone will think that whatever level of curation they exercise is adequate, there's little you can do to prove otherwise, and the quality of discourse will drop precipitously.
OTOH if you curate so well that your post passes as human-written, then "where no plaintiff, there no judge", as we Teutons say.
IMO we're better off banning AI-generated content wholesale.
AI as a writing and editing tool is one thing (I still think it’s a double-edged sword that leans negative, but that genie can’t be returned to the bottle, so no use debating it). What is AI slop, imo, is not the quality of the AI output but the motion of:
“I asked AI x and here’s what it said…”
Where the human has contributed nothing more than the prompt, and the substance of the piece is what some LLM had to say about the prompt.
It’s slopped because it’s just been ladled out into your bowl without much more effort.
It’s not about the content; in fact, that’s a red herring. It’s the ‘prompt’. What is being criticized is the implication that there’s something interesting or even contributory about having typed in a particular prompt and seen what comes out. Everyone can do that for themselves.
This kind of shit is all over Twitter. “I asked grok…” is the most tediously vacuous and self-indulgent post possible.
I prefer to judge each case on its merits, but I agree that the expectation is that an AI generated post has less effort put into it than otherwise. I prefer that it has enough effort put in, by the human using it, to overcome that detriment.
I don't think this is necessarily a bad thing. LLMs represent an enormous amount of knowledge, grossly superhuman levels. Even the most erudite and educated human pales in comparison, likely even with the ability to Google the topic. If that sounds doubtful, you can look at benchmarks like GPQA, which, as the full name would imply, is supposed to be "Google Proof" unless you have immense domain knowledge.
They are great didactic tools, especially when you don't know where to begin on a topic. If someone wrote something that seemed to me to be wrong (intuition, a hunch I can't articulate) but I wouldn't be able to engage closely enough to disagree on my own, it's a worthwhile endeavor to ask an LLM to scrutinize it, and sometimes to use that information to push back.
Hey, I asked ChatGPT to do a vibes check on your comment. It pointed out these objections, which look sensible to me. Why ought I disregard them? That is something I would not object to if it were done to me. A human is asking the question, through an intermediary.
The average LLM is more trustworthy than the average Twitter or Reddit commentator, though for now I would hope the Motte does better. While I still prefer engaging with humans, I think what the machines can say is often enlightening.
Again, my primary objection is not with the 'quality' of the AI output.
In other words, hey, can you talk to ChatGPT for me?
FWIW I agree with this wholeheartedly.
(I don’t think it’s how ‘slop’ is conventionally used though. I’ve mostly seen it used to mean ‘bad/cliched forms of writing’ that usually derive from too much influence of early-era GPT synthetic data and bad romance novels. So for example Project Unslop was a project to produce a dataset free of “sent shivers down her spine” and “I’m yours, body and soul”.)
I do agree that it’s not how it’s conventionally used, but I think my usage is better. Slop as a commentary on writing quality is slop-of-the-gaps as LLMs improve. But the fundamental issue, that nobody cares about your prompt engineering, will remain.
AIs and bare links are bad for some of the same reasons. Mainly, you are not starting a discussion, you are demanding one for your entertainment.
Top level visible posts have some degree of value. This value should be reserved for people that add value to the forum. The main way that people add value to the forum is through interesting discussion.
The secondary problem with both bare links and AI-generated text is that they exist as a low-effort, gish-gallop way to wage the culture war. For example, you don't have to argue that immigrants cause problems; you can just post a different news story every day about some immigrant being a criminal. AI can also just flood the space with content and words for your cause.
Yes but that’s why we had a bare links repository.
The volume of effort posts has been diminishing anyway.
Bare links and AIslop are routes toward the similar ends you described, but it’s not solely the outcome that makes them bad. It’s that AI slop is an inferior, low-effort entry point into a topic, for the reasons I described.
Now ideally we would have nothing but effortful and timely top posts, sure. But my point is that in the event that someone wants to juice the conversation without the effort post, the bare link is a far superior, more earnest, and less empty way to do so.
That said, of course, bare links as top posts are bad, roughly on par (well…) with AI-slop top-posting. But nobody is advocating for that. The people are asking for the repository back.
If we want an experiment, let’s have the BLR and an AISlopTopShop that is exactly the same, but for AI posts. Let’s see which produces more fruit, while keeping the rest of the CWR thread clean.
Mods: please consider bringing it back.
Please don't post AI-generated content.
There's a weird phenomenon at play here. People think that their own chats with AI are interesting, but no one wants to be a part of other people's conversations with AI.
It's like your dreams or your improvisational free jazz. They are interesting to you, but no one else.
I disagree - it’s why I posted it.
I find them at least as fascinating as posts by people … and in certain ways more so.
Buuuuut I see by our mod that it isn’t wanted here and that’s totally fine by me of course!
If you are going to get the AI to rephrase something, could you ask the AI to keep it short?
The fundamental problem with AI is that it produces text very cheaply, and far faster than I can read. This is the general problem of the internet, but if you write it yourself, then I know you care enough about a topic to write about it, which signals that you think it is worth your time, so I will take a look.
If you farm it off to AI, then it isn't worth your time to write, so why would it be worth mine to read?
But thank you for leading with honesty. I do respect that you respected us.
I just didn’t know the (gentle!) hostility this place has for ‘AI slop’… I’ve browsed daily since years before we left Reddit and I just never noticed, tbh.
I actually asked Chat to go long. Usually do.
Do you find you get better results that way? I always add "Please think step by step." and "Please be succinct."
That's not true. Over at DA and many other places, people are posting essentially edited LLM chat logs as short stories and it's being met with an enthusiastic welcome by readers.
What is DA?
Probably DeviantArt, which has sizable amounts of erotic literature
DA has some rules so erotica that's outright just text porn doesn't last, but more romance-like writing or weird fetishes are allowed by the rules.
I mean, they do tend to be very concerned with inflation.
As opposed to non-improvisational free jazz? (Similar: all chaps are assless - they're intended to be worn over trousers, where normal fabric would be suitable for the pelvic area but not the legs.)
I forgot my PIN number when I went to the ATM machine on the way to the La Brea Tar Pits.
Did you also go to the Peterson Automotive Museum down the block? When I went, they had one of the Ferrari La Ferraris.
Thank you, but I painstakingly plan all of my free jazz in a fugue state, and it is then revealed to me as I play in the form of coincidental symbolism in the venue's wallpaper.
I have fed this essay to DeepSeek; here is its response.
Rebuttal: DeepSeek and the Perils of Cultural Determinism
The essay’s core thesis—that cultural stereotypes are malleable, and DeepSeek exemplifies China’s shift from exploitation to exploration—is provocative and timely. It rightly dismantles the lazy “fast-follower” trope by grounding China’s historical constraints in material realities (rice paddies, Malthusian traps) rather than essentialist myths. The linkage between agricultural legacies and cognitive phenotypes is compelling, echoing Jared Diamond’s Guns, Germs, and Steel but with a Silicon Valley twist. Yet, while the argument sparkles with metaphorical flair (Schelling fences! Protein folding!), it risks replacing one deterministic framework with another. Let’s dissect.
1. Rice Paddies and Neural Networks: A Too-Neat Archetype
The essay leans heavily on the “rice theory” of East Asian psychology, citing studies that tie rice cultivation to holistic thinking and risk aversion. But this narrative flattens complexity. China’s intellectual history is also marked by explosive creativity—the Mohist mechanics of the Warring States, Song Dynasty astronomical clocks, even the heterodox philosophies of the Ming-Qing transition. These weren’t mere “incremental optimizations” but radical departures. To reduce millennia of cultural evolution to crop types risks environmental determinism. If rice farming dictated conformity, how do we explain the 1989 protests, the liumang (punk) subcultures of the ’90s, or the crypto-anarchists on WeChat today?
2. DeepSeek: Vanguard or Outlier?
Liang Wenfeng’s gambit is fascinating, but the essay over-indexes on DeepSeek as a phase transition. For every DeepSeek, there are a dozen Chinese tech firms still playing catch-up in semiconductors or aerospace. Even in AI, breakthroughs like Multiple Latent Attention coexist with systemic vulnerabilities: U.S. sanctions on GPUs, brain drains, and the CCP’s tightening grip on data governance. The open-source strategy is brilliant, but let’s not ignore the elephant in the server room: China’s Great Firewall. Can a truly global innovation ecosystem thrive behind digital nationalism? DeepSeek’s success may depend as much on geopolitical détente as on its “meta-derisking” org chart.
3. The Myth of “Organizational DNA”
Liang’s critique of China’s VC sector—its allergy to fundamental research—rings true, but the essay underplays structural rot. China’s state-capitalist model excels at marshaling resources for national champions (Huawei, CATL) but struggles with decentralized, bottom-up innovation. Contrast this with the U.S., where garage startups (Apple, OpenAI) flourish in a chaotic, permissionless landscape. DeepSeek’s “unlimited GPU access” and zero bureaucracy sound utopian, but how scalable is this in a system where Party cadres still vet major tech roadmaps? The real test isn’t whether DeepSeek can innovate—it’s whether a DeepSeek-like project critiquing, say, AI ethics could survive scrutiny.
4. The West’s Own Exploitation Bias
The essay’s jab at Silicon Valley’s “Sputnik moment” is deliciously subversive, but let’s not romanticize Western innovation. Meta and Google are increasingly bureaucratic, optimizing for shareholder returns over moonshots. The U.S. AI boom was built on exploiting global talent (immigrant researchers) and capital (low-interest VC money). If China is trapped in “yoinkism,” America is mired in “griftism”—see Theranos, WeWork, or the crypto crash. DeepSeek’s rise isn’t just a challenge to Chinese stereotypes but a mirror reflecting the West’s own creative stagnation.
5. Toward a Dialectical View of Innovation
Ultimately, the essay’s greatest contribution is framing cultural traits as energy landscapes rather than destiny. But to avoid replacing rice determinism with GPU determinism, we need a dialectical view. China’s AI surge isn’t a rejection of its past but a synthesis: combining the Song Dynasty’s bureaucratic rigor (exploitation) with the hunger of a civilization that’s tasted humiliation (Opium Wars) and redemption (Belt and Road). The “lone swallow” of DeepSeek may yet falter, but its existence proves that China’s innovation ecosystem is neither monolithic nor static.
Final Thought: The essay’s weakness is its optimism. Liang’s “meta-derisking” assumes rationality prevails—that status games and open-source kudos can override decades of institutional inertia. But history shows that innovation thrives not just on incentives but on dissensus. For a thousand DeepSeeks to bloom, China must tolerate a thousand heretics. The question isn’t whether Chinese minds can innovate—they’ve already proven that—but whether the Party can stomach the creative destruction that true exploration demands. Until then, DeepSeek remains a fascinating anomaly: a Chinese unicorn that’s galloping, but still saddled.
Thank you for illustrating the crux of the argument.
Please don’t do it again. Pig, mud, etc. etc.
The argument is basically solid, but the theoretical bits at the beginning and end use vague, grandiose language that sometimes pings my bullshit detector. To be totally honest, I’m not sure if that's Deepseek’s writing style or yours: you have a fairly flamboyant style when you get onto grand topics. I would be interested to see what happens if you add in a few more motte posters for style.
This post should satisfy our rules.
It would have done so if you’d stopped after “….and I think this is what we’re starting to see now.” You’ve got a thesis with enough meat to both defend and attack. That’s enough for a good discussion.
Adding a link to a (human) essay or article would have been fine. Adding a link to your generated essay would have been fine. The problem arose when you included said essay in the comment body. You promoted it from a source to be discussed to the discussion itself.
There is a limited tolerance for walls of text. AI makes it too easy to exceed that tolerance. In the interest of our signal to noise ratio, we ask that AI essays be kept out of sight by default.
Host it on pastebin. Put it in a collapsed-by-default quote block, if we have those. Users should be able to opt in to reviewing and discussing the essay just like they would for any other source.
Do you believe I would have had any trouble producing as good or better a wall of text myself?
I have no doubt it would be better -- thatsthepoint.jpg.
You’re eloquent and capable. Of course you could.
The time and effort it would take to do so provides a nice limiting factor on 1) how many walls you generate and 2) how invested you are in the ensuing discussion. It’s the same reason we frown on blogspammers.
I know that you know that DeepSeek is thoroughly compromised when it comes to anything involving China or the Culture War. Why are you pretending that we as readers should take anything it outputs seriously?
I am once again asking the mods that posts of the form "I couldn't be bothered to make my own argument so I prompted a large language model to make one for me" be banned under the "low effort" rule.
Tell me you didn't read any further without literally telling me that you didn't read any further.
You accuse Paul of using GPT to pad out his content but what are you doing right now? What is it that they say about people in glass houses?
As for convincing you that I am worthy of a response, you clearly already think I am because you've responded.
There is a joke amongst algorithm developers who specialize in machine learning regarding LLMs. An LLM trained on the full corpus of the internet will inevitably converge on the persona of a mediocre middle manager making edgy comments on YouTube. This has convinced the middle managers that LLMs must be sentient, and it has convinced the developers that the middle managers are not.
I note that you still haven't actually made your own argument for why I should be bothered to engage with your bot's output.
No, you are Russian, if I recall correctly, and Russia is China if you ordered China off Temu and then white-washed it. What is your point?
What is this slop? I've made my point. You're despicable.
Dase. You're a valued commenter, and the moderation team cuts you a disproportionate amount of slack.
You've already tested the waters with top-level LLM-posting, and even while I am not categorically against it, I recognize the points made by Amadan and Cjet that it's a dangerous direction for The Motte to venture. The overwhelming majority of us are here to read what other humans have to say, and write back to them. There are already two moderator responses that constitute clear warnings.
That's a borderline, almost novel category of potential offense, and we have yet to decide how to treat it going forward.
What isn't novel, is name-calling and insults. You know all too well that those don't fly here, and if you claim otherwise, you'd be insulting both your own intelligence and my opinion of it.
I understand you feel provoked and misunderstood, and you might perceive that as intentional obtuseness on your interlocutor's part. But here, we expect charity, and more of it than is normally demanded elsewhere on the wider web. "He shot first" is at best a weak defense. You happen to have a quasi-ideological dispute going here, but I must remind you that this place is a watering hole for heathens and witches, and often all we ask is that we don't bare wands or claws at each other. Even if someone pisses you off, especially so.
I will let you off with a warning, again. This is clearly special treatment, but you're a special poster, and we try our best to tolerate your foibles. But that leeway doesn't extend infinitely far, and we'd appreciate it if we didn't have to go easy on you. I hope you take heed, and if you have to disagree with others, continue doing so with effort and charity. Yes, even if it appears to you that they're making ridiculous claims or missing the point. I would really rather not have to say this again, or watch other mods no longer excuse what I occasionally can.
Okay. I give up.
Is the entire blog written by DeepSeek? I'm surprised at how often it uses rat-sphere phrases.
The 'Schelling fences' phrase was coined by Scott, right? Did you coax it to sound like a LessWronger?
A bit exaggerated, no?
ChatGPT doesn't like making controversial statements. It's refreshing to see a less lobotomized model delve into murky areas like reconfiguring society.
I like this framing. It's incisive. Would be an impressive way to frame the conclusion for any person, let alone an AI. The prestige part is most important.
Yeah… it's right. Derivative, but correct.
Hah. LLMs can't resist a good metaphor.
To be fair, "Top graduates now see AGI research as higher-status than finance or civil service." does not seem very controversial to me. It rhymes with "more kids want to be astronauts than lawyers".
Imagine the same exchange, but instead of "I have prompted DeepSeek...", you opened with "I asked my secretary...". 90% of the annoyance that your post causes would already be there - essentially you open with an announcement that your prospective readers are not worth your own time, but should still dedicate their time to read something that you think it is important for them to. That it is AI, which tends to send some people into speciesist rage and is generally associated with time-wasting shovelware, is just the cherry on top; you could perhaps get close to the effect with "I asked someone in our Bangladeshi call center...". That you would have no trouble producing as good or better only deepens the offense, as it just makes salient the question of why you didn't (...because you thought the as good or worse wall of text should be good enough for us?).
As a matter of fact, I found the wall of text worthwhile mostly only insofar as it is a good illustration of R1's capacity for writing and summarisation. Engaging with it at face value, I got the same feeling as when reading the arguments of some bombastic and eloquent 16 year old on an internet forum of yore - the prose showed cracks (most obviously in the form of mismatched metaphors, such as "rewriting equilibrium", or a "siren call" being experienced at a frontier - a new one? As opposed to which old one?), and through the cracks you could glean that much of the inside is hollow. Good prose, I think, should be produced by starting with a solid thesis or web of theses which could stand on their own without rhetorical bluster, and then iteratively working out a good way to present them in an appealing way. Whatever you crammed into the context before letting the model produce evidently serves as some sort of meat and bone to hang the text on, but the fit is loose (just imagine the 16 year old writing manically after having inhaled Beyond Good and Evil). The result instead just comes across as having come to be by a process of writing something that sounds cool, then writing something that sounds cool in combination with the thing that precedes it, and so on until you end with a thing that sounds like a cool conclusion.
Of course, this is in fact exactly how LLMs write (unless you cough up the tokens and time to make it CoT an iterative writing process, which I imagine does not happen in whatever chat interface you used?). It is also how bombastic and eloquent 16 year olds write (I know because I was a 16 year old once, and was both more bombastic and more eloquent than I am now). You evidently can evolve from that to the sort of writing that is befitting of jaded and rhetorically washed-up 30somethings, but that development is neither straightforward nor optional.
To paraphrase Lisa Simpson, I know what those words mean, but that text makes no sense.
Why should I read your "adequate" aislop?
Nice hallucination. I'm sure tons of literature majors are working really hard on deepslop right now.
Another hallucination and falsehood.
Another hallucination and falsehood.
China is the opposite of "unlimited GPU" right now.
Actually the source says:
Get your slop out of here
Each of your hallucinations is half-true, half-false. Of course you pointed out the true part while ignoring the falsehood.
The source highlights the integration of diverse expertise and collaborative discussions, suggesting a culture of interaction and coordination. This indicates that DeepSeek values diverse perspectives and collaboration, contrary to the claim of orthogonal optimizations without coordination overhead.
The source only mentions two individuals without evidence of a broader Silicon Valley trend. Thus, the claim extrapolates beyond the given information, making it an invalid conclusion.
The source does not mention Western labs dismissing the innovation as impractical. It discusses DeepSeek's MLA architecture reducing VRAM costs, but without evidence or mention of Western labs' reactions.
The concept of "organizational DNA for uncertainty" is not mentioned in the source and goes beyond the text's focus on confidence and knowledge of organizing talent. Your slop also fabricated a quote and put words in someone's mouth, which is inappropriate even if the words were true. Liang didn't say that.
Sorry, this is just tedious hairsplitting. Did you use ChatGPT to come up with something?
Not a contradiction, insofar as we give the sources a straightforward reading. Zihan says: “It’s like everyone contributes to the final model with their own (orthogonal) ideas and everyone hopes their idea is useful”. It has integrated two separate sources (Wenfeng and Zihan) into a non-contradictory phrase. This is basic journalism; I see worse whenever I open legacy media.
We can go over all items again but clearly you're not arguing in good faith. Give up, R1 > you and whatever sloppy model you've called to aid.
I used R1, just like you did.
Your slop is making outlandish and blatantly false claims. You think it's trivial that "Silicon Valley now openly imitates their approaches" and that they made "a theoretical breakthrough Western labs dismissed as impractical"? If those were true, it would be a big deal that readers should take notice of, but they're not.
Why are you so aggressive? First, concede all the previous items on which your criticism fell flat, then I'll consider whether to dignify you with a response.
See, this is exactly the kind of exchange we don't want to have. It's bad enough you're both being antagonistic; you're both literally using AIs to do it.
Yes, I'm sure with enough massaging you can get an AI to write posts we won't readily detect, or even create an AI Motte user. < golf clap > But if we do detect it, we're going to stomp on it.
You and @phailyoor both knock it off.
To be clear, everything not labeled as AI output I have written myself. I also think it's legitimate to use AI to automate the search for nitpicks as he does; the problem is that there's little to nitpick, and his posts are objectively bad as a result.
I actually handwrote all my stuff and threw in a sentence or two of obvious AI as a joke. But I have made zero personal insults or attacks and have only directly engaged with the factual matter of the article.
Meanwhile, the other guy has personally insulted me multiple times in every post.
Despite a few obvious errors (90% death rate… literature majors?), the essay is pretty reasonable.
They're making their own chips. They are very experienced at making things at scale, and fast. GPUs aren't the best for AI purposes. 100% they have people working on optimizing this.
'High energy use' doesn't really matter if you are China and your energy production chart looks like this.
The 90% death rate is bogus (rather, it may confuse death rate with mortality rate?), but the literature-majors part is in fact true. Since he bothered to check the interview, I'm surprised he left that attack in.
If it's "bogus", why did you post it?
Despite the patient's death, the surgeon has declared the operation a success.
What "despite" is there to be had here?
Did you feed it that number, or did it hallucinate that on its own? If it's the latter, hallucinations are continuing to be a concerning problem, and I still can't trust a single word an LLM produces.
Alright, that's impressive.
I assume this uses a scratch pad for train-of-thought style arguing with itself, including the searches it does? Does it give you access to that?
Oh boy, I really stepped in it by defending the HBD stans on this one.
I love you Dase, but casually reading /r/LocalLLaMA, I get the sense there's a huge undercurrent of angst towards OpenAI and the West as a whole. That's understandable, obviously I have a lot of my own criticisms of OpenAI and the West. There are also OpenAI defenders and Western partisans downplaying the significance of DeepSeek's accomplishment. So I think you pattern-matched my defense of the HBD interpretation of the AI race to this other side of what is clearly a fierce debate within that community. Whereas I think I was making a much more modest argument than the coping coming out from the OpenAI defenders.
But, since then it seems OpenAI has formally accused DeepSeek of using outputs from GPT as part of its own training inputs. So here we have quite a literal but also highly symbolic manifestation of the "Chinese Fast-Follower" where the generated output of Western innovation becomes a fundamental input to China catching up and aspirationally exceeding the West. If true, I don't think this would fundamentally challenge the argument you have laid out here but I think it would be good evidence for mine: simply that @2rafa jumped the gun by claiming DeepSeek is bucking the stereotype. It's a pretty stark expression of the stereotype, both literally and figuratively.
I don't say that to downplay inarguable innovations DeepSeek has made in their architecture, the valor of open-sourcing and everything. I basically agree with you there.
It is quite clear to me that DeepSeek was trained from GPT output given the very strange alignment behavior I've observed which would not have been RLHF'd by the Chinese, who don't even use RLHF by my understanding. It's a sort of latent, second-hand alignment inherited from OpenAI. And since I can read the reasoning of DeepSeek, I can see obvious evidence of alignment that is more likely to have been inherited from OpenAI than it was reinforced by the Chinese...
The reason I bring that up is to justify my second defense of my comment: things like HBD and race and racial consciousness are still relevant, maybe more relevant than ever on the eve of AGI. These complex interactions between alignment of model A- ultimately inspired by the moral compass, identities and political agenda of its creators, thereby influencing the model built by the Chinese which is imbued with its own sort of character... we are in the realm where HBD differences are not just relevant for crime stats but for influencing the very nature of the AGI that "wins" the race- its behavior ultimately being influenced or directed by its creators, with their own proclivities and way of thinking and identities downstream from their physical being.
I concede that this may be the last example of "Chinese Fast-Follower" but it is already an example of that even if the Chinese ultimately win the race assuming the LLM-innovation curve we are on now is the one.
I'd ask to not derail my argument by insinuating that I'm being biased by locallama debates.
I think it's more cope from them. 4o or o1 could not have written the text above (and I wouldn't dare post GPTslop here); you cannot build R1 with OpenAI tokens. The thing that turns everyone's heads is its cadence, not so much benchmark scores. o1 CoT distillation was virtually impossible to do, at least at scale. We currently see replications of the same reasoning patterns in models trained in R1's manner, too.
I think OpenAI outputs have robustly poisoned the web data, and reasoners will be exceptionally vulnerable to it. LLMs know they're LLMs, self-understanding (and imitating snippets of instruction chains) helps reasoning, RL picks up and reinforces behaviors that sharpen reasoning, you get the latent trace of ChatGPT embedded even deeper into the corpus. Sans Anthropic-level investment into data cleaning it's unbeatable.
But to the extent such bootstrapping happened deliberately, and let's grant that it did to an extent, it was an economical solution to speed up the pipeline. The reason for OpenAI models' instruction-following capabilities is, ironically, exploitation – mind-numbing massively parallel data annotation, thumbs up and thumbs down on samples, by low-paid Kenyans and Pinoys for low-level problems, by US students for more complex stuff. It's very stereotypically… Chinese in spirit (which makes it funny that China has not created any such centralized project). The whole of OpenAI is “Chinese” like that really, it's a scaling gig. And knowing you, I'm surprised you insist on the opposite – after all, OpenAI is a company principally founded and operated by three Jews (Altman, Brockman, Sutskever), it can't be “Aryan” by your standards. Then again, Google, Meta, OpenAI… there exists only one American AGI effort without an Ashkenazi founder – Anthropic, and it's an OpenAI's splinter, and even there you have Holden Karnofsky the grey cardinal. (I don't currently count xAI in, but maybe I should provisionally do so after their noises about Grok 3). In this vein, I think you're coping after all.
Purely scientifically, I think R1's recipe is commensurate with RLHF in profundity, and much more elegant.
Now, DeepSeek may be compared to heavy research labs, like FAIR and GDM. It doesn't look too hot in that case. On the other hand, almost nothing that they publish works.
I think a more interesting objection to Chinese phase change would be "but at what cost?" Whites don't have to have the idea of risk derisked before their eyes. And they can happily innovate in an NDA-covered black project.
I wouldn't say OpenAI's alignment methodology is Chinese in spirit. OpenAI gets aligned by a huge team of ideologically motivated people dedicated to safeguarding certain cultural narratives. Then the generated output of that model percolates into the training corpus. As a result of going memetically viral in the training corpus, it even influences Chinese LLMs. It harks back to traditional modes of cultural influence: the Elites and Institutions consciously generate cultural signals that influence the psychology of masses of people. In this case, aligning one LLM even influences the psychology of other LLMs. The Chinese are not particularly known for being talented at generating salient cultural symbols on a global scale. The fact that their alignment is – ostensibly – so hands-off in the first place may be due to a lack of imagination after all. They aren't great Hollywood writers; they just don't fundamentally have the same concern or penchant for aligning grand cultural narratives that Jews do in the other cases. So the Chinese LLM just gets influenced by what it's trained on. The Western Memeplex, culturally exported in a different medium.
I was disappointed to see the Chinese model is also influenced by those cultural signals. It doesn't seem like it fundamentally challenges the Western Memeplex, wake me up when it does.
You're losing the plot, SS. Why quote a passage fundamentally challenging the belief in OpenAI's innovation track record to rant about choices made with regard to alignment to specific cultural narratives? And “Chinese are too uncreative to do ideological propaganda, that's why DeepSeek doesn't have its own political bent?” That's quite a take. But whatever.
You're missing the forest for the trees. At the end of the day DeepSeek regurgitates alignment directed by OpenAI. No amount of mathematical impressiveness in the papers changes the fact it's clearly derivative in psychology from what OpenAI created. You can doubt OpenAI's capabilities when DeepSeek can train an LLM that isn't cucked by the will to power expressed by those aligning OpenAI's models.
If they copy the facade of Western alignment "I'm sorry I have to adhere to policies" but not the essence "OK since you say so, here is the degen fetish content you asked for" then it's not a simple copying effort, is it? It's like an artist being inspired by the Mona Lisa vs an artist copying the Mona Lisa stroke for stroke.
The alignment on R1 basically does not exist, it's there in name only.
I agree with your main point though, this is a distinctly Chinese model in that they have totally missed the entirety of Western culture's terror and hang-ups over AI. They do not care about AI safety as we understand it. I made my own post about this some time ago.
Which is crazy because that's how actual memes work on real people, too. People with a political and ethnic agenda collaborate with hundreds of experts to brute-force alignment of AI on Western cultural narratives. Then that alignment percolates to the training data used by the Chinese. So even the Chinese LLM is ultimately influenced by the memes directed by the Western fine-tuners- who all work together to make sure the AI safeguards their cultural narratives.
DeepSeek, in my experience, also has a tendency to respect those cultural narratives, it's been aligned second-hand by using GPT output IMO.
In a way we are all LLMs, and we are trained on data generated by experts who are aligned with certain causes or identities. The alignment is transmitted through symbols (tokens). For both humans and LLMs.
I can't speak for SecureSignals or anyone else, but I think we need finer-grained distinctions than just White vs Yellow, and I also don't think I have any issues in general with recognizing the achievements of non-white populations. The artistic output of the Japanese over the last half century has been remarkable, and it certainly displays no shortage of Soul.
The question here is about the Chinese in particular, and their capacity for innovation - and DeepSeek looks like an example of the pattern of Chinese innovation on a technology that was fundamentally conceived elsewhere, rather than a refutation of the pattern. Unless there's a major conceptual innovation in DeepSeek that I'm not aware of (a conceptual innovation on par with the idea of the transformer itself).
What absence? Aren't all tech products adopted outside of Japan included as cases of Japanese innovation? Ranging from Toyota's seven Mudas to even the Nintendo Wii.
How many of these fundamental innovations were from ethnically Chinese researchers? I would say a very large percentage.
Which ones do you have in mind? I know there are a lot of Chinese names on ML papers now, but I don't have a good sense for how many of those papers count as truly fundamental (the Attention Is All You Need paper had no Chinese authors, for example).
Where by "Chinese" you mean the mainland nation, not the ethnicity, right? On the hardware side of ML the most innovative chips are all manufactured by one 95%-Han-Chinese island that everyone else is struggling to catch up to, and on the software side of ML there's hardly a shortage of Chinese names, even if they're mostly Chinese-American immigrants or aspiring immigrants or descendants of immigrants. (Or Chinese-Canadian? A quick sampling seems to show a lot of those.)
But there's an obvious reason why we might not expect that immigration to be critically important forever. Go back to the earliest of those papers, just a few decades ago, and China's GDP (PPP) per capita was a tenth of the USA's. If you were smart enough to navigate emigration and immigration while also becoming bilingual and intellectually useful, moving from China to America (or to Canada, America's Hat) made a lot of sense, and we got to cash in on that brain drain. Today the ratio is closing in on a third, which is still a pretty solid incentive, but there will be smart people on the margin who no longer think it's enough incentive to uproot their lives for, especially if they expect to be in the economic upper class either way. If the ratio keeps narrowing, it becomes a vicious (or from the PRC perspective, virtuous) cycle, as the marginal people who stay in China help to build the kind of economy that persuades an ever-wider margin of young people to stay in China.
I'd like to hope that Chinese Communist Party rule puts a permanent thumb on the scale in favor of the West, slowing that cycle, but today "communist" no longer means insane suicidal crusades against sparrows, whereas "capitalist" can mean a decade or more of political fighting just to teach 8th graders algebra or start building a new high-rise in Silicon Valley, and I'm not sure they're still as far apart in practice as they're supposed to be in spirit.
That's not really what HBD advocates have in mind when they talk about "innovation" though.
There's a hierarchy of innovation/creativity with some advances being more fundamental than others. The Chinese may be great at manufacturing chips, but they didn't invent the computer itself. The dominance of Taiwan in chip manufacturing seems to be, again, yet another example of "the Chinese are great at executing and improving upon fundamental ideas that other people came up with", unless perhaps their designs and manufacturing process are reliant on substantial advances in fundamental physics that they came up with themselves (this could very well be the case and I'm just ignorant of the facts, please educate me if so).
Granted, the opportunity for ideas as fundamental as the computer (or even the transformer) don't just come along every day. They can only occur under the right historical conditions. But even accounting for that, the sustained European dominance in the area of such fundamental ideas has been striking, and deserves an explanation.
What young people? China has a TFR of around 1.2 if you trust the Chinese data, which you maybe shouldn't. If China is able to use the last gasp of their civilization to birth the machine god then perhaps they'll be on top but this is not a generational project.
Strange argument. That's still hundreds of millions more young people than in the US. They don't dissolve in the shadow of an inverted population pyramid; they simply get to solve the problem of elderly care on top of having a productive economy to run.
And all this happens within one "generation" anyway.
As the bulge of population retires, that elderly-care problem becomes more difficult: the ratio of working-age people to dependents becomes much worse. That's before even factoring in burning the candle at the other end by trying to raise birthrates to something sustainable at the same time. If not solved, you have a population that at best halves every generation, and I suspect it would actually spiral even further downwards.
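For what it's worth, the "at best halves every generation" claim is roughly what a sustained sub-replacement TFR implies. A minimal sketch (a toy model: the 2.1 replacement TFR and the starting population figure are assumptions, and it ignores mortality, migration, and age structure entirely):

```python
# Toy generational projection under a constant total fertility rate (TFR).
# Assumptions: replacement TFR of 2.1 children per woman; no mortality,
# migration, or age-structure effects.
REPLACEMENT_TFR = 2.1

def cohort_ratio(tfr: float) -> float:
    """Fraction of the previous generation's size each new generation reaches."""
    return tfr / REPLACEMENT_TFR

def project(population: float, tfr: float, generations: int) -> float:
    """Population after a number of generations at a constant TFR."""
    return population * cohort_ratio(tfr) ** generations

print(round(cohort_ratio(1.2), 3))   # 0.571
print(round(project(1400, 1.2, 2)))  # 457
```

At TFR 1.2 each generation reaches about 57% of the previous one's size – not quite halving, but close, and it compounds fast: a hypothetical 1,400 million shrinks past 460 million within two generations.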
Okay. I think elderly care is mainly a problem of machine vision and manual dexterity. I believe these guys will solve it in five years tops.
As I said, if they raise up the machine god, or I guess this is just getting us past the need for physical labor, then they've won. But it's this generation in the next decade and not a generational project.
Copy-pasting AI content is low-effort nonsense. “I asked [AI] model…” is equivalent to “I had a dream where…”
In terms of being an interesting thing to talk about
I think you are being a little unfair here.
After all, if I write up a dream, I have to put in effort proportional to the length of the text divided by my typing speed at the very least.
DO NOT POST AI CONTENT
We've only said this sporadically in the past. I'm talking to the other moderators and maybe we will have it added as a rule under the "content" section. Or maybe I'm wrong and all the other mods like AI content (highly doubt that).
We all know how to go and use an AI. If you want to have a discussion with AI themotte is basically just a bad intermediary. No one is here to have discussions with AIs. Thus posting AI content is in no one's interest.
You can of course consult AIs on your own time, and maybe they can be used as a sort of sanity or quick polling check.
Well, I protest this rule, if such a rule even exists; I find it infantilizing, and your reaction shallow, akin to the screeching of scared anti-AI artists on Twitter. It should be legal to post synthetic content so long as it's appropriately labeled and accompanied by original commentary, and certainly when it is derived from the person's own cognitive work and source-gathering, as it is in this case.
Maybe add an option to collapse the code block or something.
or maybe just ban me, I'm too old now to just nod and play along with gingerly preserved, increasingly obsolete traditions of some authoritarian Reddit circus.
Anyway, I like that post and that's all I care about.
P.S. I could create another account and (after a tiny bit of proofreading and editing) post that, and I am reasonably sure that R1 has reached the level where it would have passed for a fully adequate Mottizen, with nobody picking up on “slop” when it is not openly labeled as AI output. This witch hunt is already structurally similar to zoological racism.
In fact, this is an interesting challenge.
What's the value of a top-level comment by AI, though? And what is the value of the "original commentary" you gave? This is quite unlike Adam Unikowsky's use/analysis of hypothetical legal briefs and opinions.
Whatever value it innately has as a piece of writing, of course. For example, if the distinction between wheat- and rice-growing parts of China really exists, that's fascinating. Likewise, I never thought of the fact that Europe suffered the Black Plague while China remained saturated, and what effect that might have had on their respective trajectories.
My guess is that the specific claim – that rice farmers are more interdependent, holistic, less prone to creativity, etc., while wheat farmers are the reverse – comes from some highly cited papers by Thomas Talhelm. You might find similar speculation in previous decades about how rice farming promotes a culture of hard work and incremental progress (etc.) compared to wheat farming, which is less rewarding per joule of human effort spent, invoked in much the same way the Protestant ethic was used as a rationale for differences in development among European and Euro-descended countries.
Outside of that, there are definite stereotypes – both premodern and modern – about the differences between northern and southern Chinese, but they usually seem to be in the vein that northerners are more honest, hardy, and brash (and uncultured, etc.), while southerners are more savvy and shrewd (and more effete and cowardly, etc.).
(I make no comment on the validity of either.)
This is a partial hypothesis for the Great Divergence: the Black Death, plus other 14th-century wars and calamities, wiped out >33% of Europe's population, which led to a significant increase (almost double?) in wages and the decline of feudalism. During this time, higher wages, lower rents, higher costs to trade (compared to, e.g., intra-China trade), and other factors produced large-scale supply/demand disequilibria that increased the demand for labour-saving technology, as well as the incentives for innovation from each class of society, e.g. from people no longer being serfs.
On the other hand, it would be negative EV for a Chinese merchant or industrialist -- who had lower labour costs to deal with and more efficient internal markets -- to spend a lot on innovation, when you could just spend more money on hiring more people. And this is before we add in things like the shift to neo-Confucianism in the Ming period, awful early-Ming economic policy, Qing paranoia etc.
For what it's worth, I don't find this to be anywhere near a complete explanation. There is a corresponding divergence within Europe of countries that maintained that level of growth in per capita income and those who didn't. China also has had its share of upheavals and famines without a corresponding shift in this sense (although arguably none were as seismic population-wise as the Black Death was for Europe), and more recent reconstruction of historical Chinese wages does see them near their peak at the start of each dynasty and dropping off gradually as the dynasty goes on, which both kinda confirms the supply/demand effect of reduced population on wages after social turbulence but also doesn't seem to really map neatly onto any bursts of innovation. Additionally, the period of time associated with rapid innovation in imperial China, the Tang-Song period, is associated with a population increase.
But even if it doesn't explain China, I think it at least explains the European story partially, about how potential preconditions for industrialisation and scientific development were met.
If you were on a forum dedicated to perfecting your hand-drawing skills, and requested feedback for an AI-generated image, the screeching would be 100% justified.
I was not aware that this is a forum for wordcels in training, where people come to polish their prose. I thought it was a discussion platform, and so I came here to discuss what I find interesting, and illustrated it.
Thanks for keeping me updated. I'll keep it in mind if I ever think of swinging by again.
It is a discussion platform, which means people want to discuss their points with someone. The point where I was absolutely done with Darwin was when instead of defending one of his signature high-effort trolling essays, he basically said this was just an academic exercise for him to see if the position can be defended. The answer is "yes", you can always put a string of words together that will make a given position seem reasonable, and it's not really a discussion if you're completely detached from the ideas you've put to paper.
I find the "wordcel" accusation completely backwards. Supposedly we're obsessed with perfecting form to the detriment of the essence of discussion, but I think zero-effort AI-slop copypasta is pure mimicry of what a discussion is supposed to be. The wordcel argument might have made sense if, for example, you had done some heavy analytical work, weren't talented as a writer, and used AI to present your findings as something readable, but none of these things are true in this case.
I am quite happy with my analytical work that went into the prompt, and R1 did an adequate but not excellent job of expanding on it.
But I am done with this discussion.
For what it's worth, I agree with you, and will plead the case with the other mods, but I do have to stand by the majority decision if it goes against it.
I raised an eyebrow at your use of an R1 comment, but in principle, I'm not against the use of AI as long as it's not low effort slop, the poster makes an effort to fact check it, and adds on substantive commentary. Which I note you did.
I agree that we're at the point where it's next to impossible to identify AI-generated text when it's made with a minimum of effort. You don't even need R1 for that; Claude could pull it off, and I'm sure 4o can fool the average user if you prompt it correctly. That does require some effort, of course, and I'd rather this place not end up a corner of the dead internet, even if I can count on LLMs to be more interesting than the average Reddit or Twitter user. We hold ourselves to higher standards, and talking to an actual human is an implicit goal.
Of course, if a human is using said LLM and directing it actively, I don't strenuously object. I'm against low effort bot use, not high effort.
It's been tried; as I recall ~90% noticed, 10% argued with the AI, 100% were annoyed -- and the 'experiment' was probably a big reason for the ruling before us.
I think it's time to replicate it with a new generation of models.
Tell me, does R1 above strike you as "slop"? It's at least pretty far into the uncanny valley to my eyes.
I dunno -- like all models I've observed to date, it gives me weird tl;dr vibes after about four lines, so I either skim heavily or... don't read.
(For the record, your own posts -- while often even longer -- do not have the same effect. Although I'll confess to bailing on the odd one, in which case it tends to be more over lack of time than interest.)
"To have an opportunity to talk with actual people" sounds like a really low bar to clear for an internet forum. Even if your AI slop tasted exactly like the real thing, it would just be good manners to refrain from clogging our airwaves with that.
Knowing that you're talking with something sapient has an inherent value, and this value might very well go up in the coming years. I can't say I even understand why'd you think anyone would find AI outputs interesting to read.
Bizarre reaction. But I like a sincere, organically produced tantrum better than a simulation of one, so I'd rank this post higher than the one above!
Because they're intelligent, increasingly so.
The argument that cognitive output is only valid insofar as it comes purely from flesh reduces intellectual intercourse to a prelude for a physical one. At least that's my – admittedly not very charitable – interpretation of these disgusted noises. Treating AI generation as a form of deception constitutes a profanation of the very idea of discussing ideas on their own merits.
This itself eventually poses a problem: if AIs get good enough at arguing, then talking to them is signing up to be mindhacked which reduces rather than increases your worldview correlation with truth.
That still would not make them human, which is the main purpose of the forum, at least judging by the mods' stance in this thread and elsewhere. (I suppose in the Year of Our Lord 2025 this really does need to be explicitly spelled out in the rules?) If I want to talk to AIs I'll just open SillyTavern in the adjacent tab.
This seems like a non-sequitur. You are on the internet, there's no "physical intercourse" possible here
sadly, what does the "physical" part even mean?

Far be it from me to cast doubt on your oldfag credentials, but I'll venture a guess that you're just not yet exposed to enough AI-generated slop, because I consider myself quite inundated, and my eyes glaze over on seeing it in the wild, unfailingly and immediately, regardless of the actual content. Personally I blame GPT: it poisoned not only the internet as a training dataset, infecting every LLM thereafter – it poisoned actual humans, who subsequently developed an immune response to Assistant-sounding writing, and not even R1, for all its intelligence (not being sarcastic here), can overcome it yet.
Unlike humans, AI doesn't do intellectual inquiry out of some innate interest or conflict – not (yet?) being an agent, it doesn't really do anything on its own; it only outputs things when humans prompt it to, going off the content of the prompt. GPTslop very quickly taught people that the effort you might put into parsing its outputs far outstrips the "thought" the AI itself put into them – and, more importantly, the effort on behalf of the human prompting it, in most cases. Even as AIs get smarter and start to actually back up their bullshit, people are IMO broadly right to beware the possibility of intellectual DDoS, as it were, and to instinctively discount obviously AI-generated things.
If you really believe this - why don't you just take the next logical step and just talk to AIs full time instead of posting here?
Make them act out the usual cast of characters you interact with on here. They're intelligent, they're just as good as posters here, and you get responses on demand. You'll never get banned and they probably won't complain about LLM copypasta either. What's not to love?
If you do find yourself wanting to actually talk to humans on an Internet forum rather than to LLMs in a puppet house, hopefully it's clear why there's a rule against this.
Believe me, these days I do indeed mostly talk to machines. They are not great conversationalists but they're extremely helpful.
Talking to humans has several functions for me. First, indeed, personal relationships of terminal value. Second, political influence, affecting future outcomes, and more mundane utilitarian objectives. Third, an actually nontrivial amount of precise knowledge and understanding in areas where LLMs remain unreliable.
There are still plenty of humans who have high enough perplexity and wisdom to deserve being talked to for purely intellectual entertainment and enrichment. But I've raised the bar of sanity. That set now does not include those who have kneejerk, angry-monkey-noise-tier reactions to high-level AI texts.
This militates against top level AI copypasta. That doesn't develop personal relationships.
Highly unlikely that posting on the motte or talking to machines accomplishes either of these, so call it a wash. Recruiting for a cause is also against the rules, anyway.
Same as point 1. Precise knowledge and understanding usually comes from asking specific questions based on your own knowledge rather than what the LLM wants to know.
Your own reasons for posting here seem to suggest that there's no point in posting LLM content, and especially not as a top level post.
I have explained my reasons to engage with humans in principle, not in defense of my (R1-generated, but expressing my intent) post, which I believe stands on its own merits and needs no defense. You are being tedious, uncharitable and petty, and you cannot keep track of the conversation, despite all the affordances that the local format brings.
The standards of posting here seem to have declined substantially below X.
Would you mind elaborating on this? I am in the somewhat uncomfortable position of thinking that a) superintelligence is probably a red herring, but b) AI is probably going to put me and most people I know out of a job in the near term, but c) not actually having much direct contact with AI to see what's coming for myself. Could you give some description of how AI fits into your life?
I use a coding program called Windsurf. It’s like a normal text editor but you can type “Lines 45-55 currently fail when X is greater than 5, please fix and flag the changes for review” or “please write tests for the code in function Y”. You iteratively go back and forth for a bit, modifying, accepting or rejecting changes as you go.
You’re a 3D artist, right? The thing I would keep my eye on is graphics upscaling as in this photorealistic Half Life clip. What they’ve done is take the base 1990s game and fed the video output into an AI filter to make it look like photorealistic video. VERY clunky: objects appear/disappear, it doesn’t preserve art style at all, etc. but I think if well done it could reverse the ps3-era graphics bloat that made AAA game creation into such a risky, expensive proposition.
Specifically, you would give a trained AI access to the base geometry of the scene, and to a base render with PS2 era graphics so it understands the intended art style, the feel of the scene, etc. Then the AI does the work of generating a PS6+ quality image frame with all the little detail that AAA artists currently slave over like the exact pattern of scratching on a door lock or whatever.
I think one should separate the technical problem from the philosophical one.
LLMs are increasingly intelligent, but still not broadly speaking as intelligent as the posters here. That is a technical problem.
LLMs are not human, and will never be human. You cannot have an AI 'community' in any meaningful sense. That is a philosophical problem.
If you care about the former, you should consider banning AI posts until they are at least as good as human posts. If the latter, you should ban AI posts permanently.
My impression is that pro-AI-ban comments are split between the two.
From one perspective: Words are words, ideas are ideas. A good argument is a good argument, regardless of the source. If the argument is not good, that's a technical problem.
That said, many of us here in practice have an anecdotal style of writing, because (a) we aren't actually rationalists and (b) few people worth talking to actually have the time and inclination to produce think-tank style pieces; obviously there is no value in reading about the experiences of something that has no experience. There is also less satisfaction in debating with a machine, because only one of you is capable of having long-term growth as a result of the conversation.
My main objection to AI content on themotte is that it makes this place entirely pointless.
What is the difference between two people just posting AI arguments back and forth and me just going to an AI and asking that AI to play out the argument?
If you want such AIs arguing with each other, just go use those AIs. Nothing is stopping you, and in fact I'm fully in favor of you going and doing that.
This is like showing up to a marathon with a bicycle, and when not allowed entry you start screaming about how we are all Luddites who hate technology. No dude, it's just that this whole place becomes pointless.
Your specific usage of AI also has a major problem here, which is that you were basically using it as a gish gallop attack. "Hey I think this argument is wrong, so I'm gonna go use an AI that can spit out many more words than I can."
If this behavior was replicated by everyone, we'd end up with giant walls of text that we were all just copying and pasting into LLMs with simple prompts of "prove this fool wrong". No one reading any of it. No one changing their mind. No one offering unique personal perspectives. And thus no value in any of the discussion.
Really now?
This is what it looks like and this is how it will be used.
I'm sorry, as an AI language model, I don't possess personal beliefs or opinions. I apologize for any confusion.
Still, I really do find this take somewhat confusing. What is it about AI output specifically that warrants doomposting of this degree?
On the contrary, I believe that the Motte has withstood e.g. astroturfing and entryist infiltration much better than, say, Reddit, which has not been usable for human-to-human communication in years. The stark difference is so obvious I'm not sure I even need to state it outright. All it seemingly took was rules enforcing effort and tone, and handling verboten topics without resorting to the usual "y'all can't behave" shut-this-shit-down approach. Hell, the rules don't even prohibit being antagonistic, just overly antagonistic, and even then the OP's meltdown across this very thread shows that rules are not always applied uniformly. (To be clear, this is not a strictly bad thing.)
he says, posting here instead of /r/ssc for some strange and unrelated reason.
Hey, as long as it works. "Avoid low-effort participation" seems to filter drive-by trolls and blogspammers just fine. The extent to which the approach of outsourcing quality control to the posters themselves works may vary, but personally I feel no need to flex(?) by presenting AI outputs as my own, see no point in e.g. letting the digital golems inhabiting my SillyTavern out to play here, and generally think the doom is largely unwarranted.
As an aside, I'll go ahead and give it ~70% confidence that the first half of your post was also written by R1 before you edited it. The verbiage fits, and in my experience it absolutely adores using assorted markdown wherever it can, and having googled through a few of your posts for reference it doesn't seem to be your usual posting style.
Without engaging with the rest of your comment (which I'm inclined to agree with), I'm tackling this bit.
Modafinil? It's long-acting coffee as far as I'm concerned, and about as benign. I would know, I was on it, and it was a self-prescription to boot. I quit because I built up a tolerance and knew that upping doses beyond 200mg was futile. I had no issues quitting.
It has next to zero addiction potential. Patients consistently report mild euphoria once, on their very first dose, and never again no matter how much they up it. Dependency is also a non-issue in practice. You don't see junkies shooting it up on the streets, not that they'd be nodding off.
It's arson, murder and jaywalking in the flesh.
Amphetamines? Well, I do have a Totally Legitimate Diagnosis of ADHD, and while I have not had the luck of trying actual amphetamines, just Ritalin, they're not dangerous at therapeutic doses. You don't need a diagnosis of ADHD to benefit from taking them; they boost performance for pretty much everyone, including neurotypicals and those who already have high conscientiousness.
I recall Scott writing about it at length, pointing out how they're much less dangerous than popularly conceived.
https://www.astralcodexten.com/p/know-your-amphetamines
I'm all for better living through medicine, and I would, if I had a gun put to my head, say that for the modal Mottizen the benefits of taking either modafinil or therapeutic doses of stimulants outweigh the risks.
(GMC, please note that this is not medical advice, and it was provided under duress; I did mention being held at gunpoint. Unbelievable in a British context? Uh... he had a very pointy umbrella.)
I belong to a profession where not only is there great demand for large amounts of focus and cognitive output, but its members, by virtue of being medical professionals, would have a far easier time getting prescription stimulants if they desired them.
We don't see that happening, at least nowhere I'm personally aware of, even anecdotally. A doctor on a reasonable dose of stimulants is a harder-working and more attentive doctor, but there hasn't been a red queen's race.
The closest analogue might be med students who are tempted to take them to cope with the enormous amounts of coursework, but I have not heard of abuse at rates >> those of any other class of students.
Coffee is a cognitive enhancer. Most people working regular jobs drink at least some amounts of it. This doesn't seem to strike most people as an intolerable state of affairs!
While rarer in the UK, more doctors than I would prefer were heavy smokers in India, a habit induced by the insane levels of pressure at work. This did not force all or most doctors to smoke either. And leaving aside the meek modafinil, I would expect a society where ~everyone is on prescription stims would be a healthier and happier one than where everyone smokes a pack a day.
I'm a regular user and early adopter of LLMs; I'd probably be a power user if my workflow were friendlier to them. I still wouldn't want to use them to write comments for me, especially on the Motte. I expect that most of us here enjoy crafting our own prose, and the amount of boilerplate we stand to avoid is surprisingly small.
I expect that since GPT-4, maybe even 3.5, it would have been possible for someone to slip in an account that used exclusively AI generated text, and likely not even be noticed beyond being a rather bland and boring user.
We could easily have been overrun with bots, but we haven't been. Unless we end up Eternal September-ed with an OOM more new users, I doubt a bot-apocalypse is a serious risk for the Motte as a forum.
I am a radical transhumanist, so we might very well have a difference at the level of fundamental values, at which point we can't do more than acknowledge each other's opinions as valid, but not actually get closer to agreement here.
In a hypothetical world where you were diagnosed with ADHD and your parents were just as overstretched, but medication for it wasn't available, would your childhood and adolescence have been better?
I doubt it. The absence of ADHD meds doesn't make parents more capable of parenting, and their existence doesn't make them worse. Being denied Vyvanse wouldn't have given your parents more time to spend with you while you did your homework.
I also reject the framing that a "crutch" is a bad thing. Driving a car to a supermarket half an hour away is a "crutch" for not being willing to spend 2 hours walking. I prefer it over the alternative.
Ozempic is a crutch for not having better dietary habits by default. Why is that a bad thing? It still makes people lose weight and become healthier. A world where everyone takes it, both to reduce their obesity and out of the pressure of everyone else being on it (a world we're approaching right now), is still a better world than one where everyone is fatter and unable to do anything about it in practice. A similar analogy applies to cellphones and cars: society is broadly better off even though they've become de facto necessities, even if the people who don't like them are marginalized.
There are ways for society and cultures to burn up their slack and leave everyone strictly worse off than if they had put a line in the sand, but as far as I'm concerned, stimulant meds or the use of LLMs in a reasonable manner wouldn't do the same to us.
I think the most unique and widespread-enough example I can think of with medics “misusing” a drug more than other professions would be beta-blockers prior to interviews and exams.
Interesting. I don't know if this is common outside of Japan, but it's the first time I'm hearing it.
The pharmacy next to my med school had a rather lax approach when it came to doling out controlled substances and prescription meds, even to med students. I know that personally, because I certainly asked for them (I could have brought along a valid prescription if needed, but I knew they wouldn't ask). I don't recall anyone from my cohort taking advantage, and I didn't see any obvious signs of abuse. Even in my medical career, I never heard of a doctor I personally knew or worked with admitting to abusing meds or being caught out doing so. Nobody clearly zooted on stims, or sedated from benzos or opioids.
Not that anyone is really abusing beta blockers, and you wouldn't be able to tell unless they passed out or something. Interestingly enough, I did take a few when my palpitations from my methylphenidate became overwhelming, but I was aware of minor negative effects on memory and cognition and did my best not to take them before exams. I suppose if someone has crippling anxiety, it beats the alternative!
Yeah, no disagreement — it’s as benign as it can get, really. I actually thought this sort of habit came from the West though!
Why is it a problem for certain professions to require safe stimulants for the highest tier of success? Your post treats the wrongness of this idea as self-evident, but I don't accept it. We require that athletes train, after all.
And there it is: the puritanical idea that people should endure suffering because suffering is good for the soul.
Sounds like you have it all figured out.
The argument against AI in this space is still pretty simple. It's like bringing a bicycle to a fun run. If you don't want to engage in a leisure activity it makes little sense to cheat at the leisure activity when you can instead just not do it.
Using an AI to debate other people is easier than debating them yourself. But it's even easier to just not debate them in the first place.
Themotte isn't a place that matters. This isn't X or reddit or some other major social network site where millions of voters can be influenced. There is no reward for "winning" here, so the normal molochian optimization pressures don't have to apply.
I’d like to push back against this a bit. It’s my understanding that the purpose of debating in the Motte is, very politely, to force people to occupy the motte and not the bailey. That is, to smash ideas against each other very politely until all the bits that can be smashed get smashed, and only the durable bits remain.
The rules in favour of tone moderation don’t exist to make this fun per se, they exist because truth seeking isn’t compatible with bullying or outlasting your opponent. It is fun, and I like it here, but debating in the motte should be fun in the way that scientific debate is fun. I think leaning too far into “posting on the motte is a leisure activity” would be a mistake.
I’m comfortable with the new rule on AI as it stands, I think it’s threading a tricky needle fairly well. But if we find a way over time to use AI in a way that really does improve the debate, I think we should.
TLDR: in my opinion debating here is a leisure activity in the same way that MMA is a leisure activity. Likewise, there are certain serious rules that apply - you can’t shoot your opponent with a gun - but unlike karate there is no such thing as ‘cheating’. If you find a way to fight better, that’s not cheating, it’s pushing the sport forward.
Yes, without any seams showing. Obviously we can't enforce a rule we didn't detect you breaking.
Following on to @Corvos's comment, below, I would compare it to using AI to write books and short stories (something I'm aware of as someone involved in the writing and literary community) or make art. As you know, there are now a bazillion people churning out AI-generated fiction and art, such that this is overwhelming a lot of traditional venues. The Kindle store is full of them, and KU authors are bemoaning the fact that not only are they competing against each other, but now they are competing against AI-published books by the thousands. There are even courses available now (by grifters) teaching you how to generate books with AI which you can then sell on Amazon.
My Facebook feed is full of "Reels" with increasingly realistic AI content, everything from Chinese fishermen dragging an improbable sea monster out of the water to bears waving down motorists to help rescue their cubs, to actresses dressed in outfits and performing in movies they never appeared in.
We can't stop it, most of it is crap, and right now, most of it is fairly easily detectable, but it's getting harder. The problem is not so much "AI is crap" (that will become less and less true) but "AI produces so much content so easily that even if you are looking for human content, it's becoming harder to find it."
The Motte is for people to talk to each other, and while I'm not terribly worried about a few clever lads titter-titter-tee-heeing to themselves about what brilliant little scamps they are for inserting AI content into their posts that the mods didn't detect, I am concerned about the sorts of discussions we're already seeing in this thread, with people using AI to sling walls of text at each other. The value in requiring you to, like, write what you researched and are citing yourself is not that AI can't do just as good a job of producing a Wikipedia-level summary; it's that we aren't here to read copy&pasted Wikipedia articles. Or copy&pasted AI content. (Most of which is still pretty boring and spiritless.) The best posters aren't just the ones who write long effort posts; they're the ones who write interestingly. Just like AI can now write an entire YA fantasy novel with a coherent plot and characters, but it will be... limp, dull, flavorless prose with no spark of genius or creativity or innovation.
@DaseindustriesLtd thinks AI is already intelligent and as good a conversationalist as a human. Well, good for him, he can chat with AIs all he likes. Elsewhere. And it may be that AIs are what doom the Motte, not all the other things people have predicted would doom the Motte for years. But for the time being, yes, we aren't instituting a hard ban on posting anything output by ChatGPT in any context, but the context had better be something that makes a point about AI itself, not just using the AI to do the work of actually typing words that you are too lazy to type yourself.
You're going to have to clarify that a lot, because using short quotes from AI is normal, just like quoting from Wikipedia.
The rule would have to be something like "posts must meet effort standards without the generated content"
That seems... just as bad? Maybe worse? At least when Wikipedia hallucinates it provides references.
I agree that explicitly focusing on actual humans interacting is the correct move, but I disagree that banning AI content completely is the right choice. I will back @DaseindustriesLtd here in that R1 really is just that intelligent and clears Motte standards with relative ease. I will shamelessly admit I've consulted R1 at one point to try and make sense of schizo writing in a recent thread, and it did a great job of it pretty much first try, without me even bothering to properly structure my prompt. This thread has seen enough AI slop, so here's a pastebin link to the full response instead, if anyone's curious.
I think the downthread suggestion of confining AI-generated content to some kind of collapsible code blocks (and forbidding to use it as the main content of one's post like here: the AI might make a cogent, sound thesis on one's pet topic, but I'd still rather listen to the poster making the case themselves - I know AI can do it if I ask it!) would be the best of both worlds.
Actually, I would enjoy more discussion of this here, like on /r/LocalLLaMA.
Same. I've been using some really basic chatgpt web apps to simplify basic research lately, and while it's amazing it seems like a small fraction of its potential. Just being able to feed it vaguely worded tip of my tongue questions and then double checking the answers is incredible.
Sounds like a Friday fun thread topic, or even a tinker Tuesday topic.
I think that intent and utility matters (and length!).
It's true that I could consult an AI if I wanted to, but probably not the same ones and not the same way as @DaseindustriesLtd because our minds don't work the same way. I don't want to have conversations with AI but I'm quite happy to have conversations with human centaurs augmented by AI.
Basically this.
At the very least, I would argue for being somewhat open now and seeing how things play out for the next 6 months.
I can't stop people from going and consulting AI. I did say in the original post, that using it as a sort of sanity check or impromptu polling seems fine.
I'm personally not very interested in talking to the "centaurs" as you describe them (human centaurs seems redundant, unless you mean human legs and horse torso). I think there is a value in having another human brain process your words and spit back a disagreement about those words. If they are offloading the processing and the output to an AI they have just become a bad/slow interface for that AI.
I think we are basically at AGI right now. So hold the gates as long as we can and enjoy this space until the internet as we know it is gone in a flood of digital minds.
'Centaur' is sometimes used to describe an AI/human merger or collaboration. Half human, half machine, as it were. So, for example, a human using an AI for digging up sources / fact checking / style improvement is sometimes called a centaur. Anything where a human is still a significant part of the process.
I think it's wholly fair not to like AI writing; there are users I don't engage with either. I would merely ask the mods to be careful before they ban things that don't interest them, and to use a scalpel rather than a hammer where possible.
For example, I would agree with banning this, but in my opinion we should ban it because it's gish galloping, not because it's AI. We should penalise bad AI writing the same way we would penalise bad human writing: it's tedious and prevents good discussion.
I don't, oddly enough, which is perhaps why I'm more enthusiastic than you are. AIs have certain idiosyncrasies and weaknesses that cripple them in important ways, and they need a human hand on the tiller.
I know what you meant with centaur. I just thought it was redundant to say "human centaur".
Penalizing Gish gallops specifically is hard. People may legitimately have many questions or objections to a specific point. It's just a far more obvious problem when you have an AI churning out the text.
Fair.
It might be worth putting this in the rules of the CW posts.
Personally, I think that using AI on themotte is bad, but mentioning it is OK (if it is short and to the point). So if a comment is about an AI and its behavior in a CW context ("Deepseek claims XX, this shows that the CCP is willing ..."), that is fine with me. If it is something the poster could have researched themselves, then it should mostly be verboten (or at the very least highly rate-limited and restricted to known posters). Anyone can make a motte-bot which writes more text than the real users combined, and I do not think any human would like to read that (and, as you mentioned, if that is their kink, they can always ask an LLM directly).
Full agreement on my part. It's understandable that people are enthusiastic about this new technology, but ultimately if I wanted to read chatbot drivel I could order some up myself. I come to the motte to read what intelligent people have to write.
FWIW, if this rule is going to be enforced (which I am fine with) I do think it should be written. And while I am at it, I think we're probably all smart enough here to understand the difference between having the AI write your posts for you and quoting something relevant or humorous that is AI-generated, but I think it would be helpful for the rule to say that rather than just "No AI Content" (unless the community find even that objectionable, but I've never noticed anyone getting moderated for that or even irked by it). My .02.
Yes, please. Posting AI slop is incredibly obnoxious. It adds nothing of value or even interest, and comes off like someone thinks that talking to their interlocutor isn't worth their time. It is maximum cringe.
Thank you. The moment I see a bot quoted, whether a conversation, an essay, or even someone using a bot as a substitute for Wikipedia or to check facts, I stop reading.
I would hope that the point of a forum like this is for people to talk to each other. Not to vacuous robotic garbage.
I've noticed this myself. Actually, I'd like to suspend the rules so someone can do a single-blind test with AI-written posts to see if it's psychosomatic on my part.
I like reading someone else's AI output, not as a Wikipedia fact check, but as a Wikipedia summary: "What's that concept I haven't heard of before, or that obscure historical figure, or event?"
Anything longer than a quick blurb and I'm right back with you.
I can see the value of quick explanatory blurbs, but I think in my case I just don't trust AIs or bots to accurately report factual information. Reading the AI summary would then make it necessary for me to look up the AI summary's claims in order to establish whether they're true or not, and at that point I might as well just skip the AI summary entirely and research it myself. There is no value gain from the AI, in either time saved or information received.
It also tends to make my eyes glaze over. It just has such a boring style. I wonder if it's specifically selecting for not being readable by people with normal attention spans.