The Great Fragmentation: A Proposal for Organized Intellectual Combat in the Age of AI

Tagline: Honestly, I’m just a crank theorist. My ideas are not to be consumed but critiqued. I’m not your guru.


The Phenomenon

Something strange is happening online: the number of people declaring “my framework” or “my theory” has exploded. This isn’t just a vibe. Google Trends shows that searches for “my framework” and “my theory” were flat for years, only to surge by several hundred percent starting in mid-2024. Crucially, searches for “framework” or “theory” without the personal qualifier show no such spike. The growth is in people creating theories, not consuming them.
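
For anyone who wants to check this rather than take my word for it, here is a minimal sketch using the community pytrends library (an unofficial Google Trends client). The keywords, timeframe, and mid-2024 cutoff are my assumptions, and Trends reports a relative 0-100 index, not raw counts:

```python
# Hedged sketch: pytrends is an unofficial client and can break without notice.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=0)
pytrends.build_payload(["my framework", "my theory"],
                       timeframe="2019-01-01 2025-06-01")
df = pytrends.interest_over_time()  # weekly relative index, 0-100 per payload

# Compare the average index before and after the claimed mid-2024 inflection.
for kw in ["my framework", "my theory"]:
    before = df.loc[:"2024-05-31", kw].mean()
    after = df.loc["2024-06-01":, kw].mean()
    print(f"{kw!r}: {before:.1f} before vs {after:.1f} after mid-2024")
```

Running the same comparison on the bare words "framework" and "theory" should come back flat if the claim holds.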

The timing is suspiciously precise: it lines up with mass adoption of high-capability LLMs. Correlation isn’t causation, but the coincidence is hard to dismiss. If skeptics want to deny an AI connection, the challenge is to explain what else could drive such a sudden, specific change.


The Mechanism

Why would AI trigger a flood of personal theorizing? The answer lies in shifting cognitive bottlenecks.

Before AI, the hard part was finding information. Research meant digging through books, databases, or niche forums. Today, access is trivial. LLMs collapse the cost of retrieval. The new bottleneck is processing: too much information, too quickly, across too many domains.

Human working memory hasn’t changed. Overload pushes the brain to compress complexity by forming schemas. In plain terms: when faced with chaos, we instinctively build frameworks. This is not a lifestyle choice or cultural fad. It’s a neurological efficiency reflex. AI simply raises the pressure until the reflex fires everywhere at once.


The Output

The result is not just more theories, but more comprehensive theories. Narrow, domain-specific explanations break down under cross-domain overload. Faced with physics, psychology, and politics all colliding, the brain reaches for maximally reductive explanations — “one framework to rule them all.”

LLMs supercharge this. They take vague hunches and return them wrapped in the rhetoric of a polished dissertation. That creates a feedback loop: intuition → AI refinement → stronger psychological investment → more theorizing. Hence the Cambrian explosion of amateur ToEs.


The Crisis

Our validation systems can’t keep up. Peer review moves in years. AI-assisted framework building moves in hours. That mismatch means traditional filters collapse.

The effect looks like a bubble. The intellectual marketplace floods with elaborate, coherent-sounding theories, but most lack predictive power. The signal-to-noise ratio crashes. Without new filters, we risk epistemic solipsism: every thinker locked in a private universe, no common ground left.
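
To make the bubble claim concrete, here is a deliberately crude toy model; every number in it is invented for illustration, not measured:

```python
# Toy model (all numbers assumed): genuinely predictive theories appear at a
# fixed rate, while coherent-but-empty ones grow tenfold per year. The
# signal-to-noise ratio collapses even though the signal itself never drops.
signal_per_year = 10       # assumption: real, predictive theories per year
noise_per_year = 100       # assumption: starting volume of empty theories

for year in range(2024, 2029):
    print(f"{year}: SNR = {signal_per_year / noise_per_year:.5f}")
    noise_per_year *= 10   # assumption: tenfold annual growth in output
```

The point is structural: filters built for a fixed production rate fail when the denominator grows by orders of magnitude.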


The Proposal

Instead of hand-waving this away, we should organize it. Treat the proliferation of frameworks as raw material for a new kind of intellectual tournament.

Step one is standardized documentation. Any serious framework should state its axioms, its scope, and its falsification criteria. No vagueness allowed.
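
As a sketch of what such an entry could look like (the field names are illustrative, not a finished standard):

```python
# Illustrative schema for a framework entry; nothing here is a fixed standard.
from dataclasses import dataclass, field

@dataclass
class FrameworkSpec:
    name: str
    axioms: list[str]       # premises taken as given
    scope: list[str]        # domains the framework claims to cover
    falsifiers: list[str]   # observations that would refute it
    predictions: list[str] = field(default_factory=list)  # novel, testable claims

    def is_admissible(self) -> bool:
        """No axioms or no falsifiers means too vague to enter the tournament."""
        return bool(self.axioms) and bool(self.falsifiers)
```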

Step two is cross-framework testing. Theories shouldn’t be allowed to stay safe inside their own silo. A physics-first framework must say something about mind. A consciousness-first framework must say something about neuroscience. Only under cross-domain stress do weaknesses appear.
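
One hypothetical way to operationalize this: fix the tournament's list of domains, then flag every domain a framework stays silent on as a mandatory stress point.

```python
# Hypothetical sketch: frameworks are plain dicts listing the domains they
# cover; the gaps are exactly where cross-domain stress testing should begin.
DOMAINS = ["physics", "mind", "neuroscience", "politics"]

def cross_examine(frameworks: list[dict]) -> list[tuple[str, str]]:
    """Return (framework, domain) pairs where a framework makes no claims."""
    return [(f["name"], d)
            for f in frameworks
            for d in DOMAINS
            if d not in f["scope"]]

print(cross_examine([{"name": "physics-first", "scope": ["physics"]}]))
# -> [('physics-first', 'mind'), ('physics-first', 'neuroscience'),
#     ('physics-first', 'politics')]
```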

Step three is empirical survival. Theories that make it through cross-testing must generate novel, testable predictions. Elegance and persuasiveness are irrelevant; predictive success is the only arbiter.
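
As a toy survival rule, under my own debatable assumptions (at least one confirmed novel prediction to stay in, any refutation to drop out):

```python
# Toy scoring rule (assumptions mine): elegance earns nothing; only resolved
# predictions count, and "pending" ones neither help nor hurt.
def survives(predictions: list[dict]) -> bool:
    """predictions: [{"claim": str, "outcome": "confirmed"|"refuted"|"pending"}]"""
    confirmed = sum(p["outcome"] == "confirmed" for p in predictions)
    refuted = sum(p["outcome"] == "refuted" for p in predictions)
    return confirmed >= 1 and refuted == 0

print(survives([{"claim": "X", "outcome": "confirmed"},
                {"claim": "Y", "outcome": "pending"}]))   # True
print(survives([{"claim": "Z", "outcome": "refuted"}]))   # False
```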


The Invitation

This essay is itself a framework, and so must submit to the same rules. If you think my analysis is wrong, bring a stronger account of the data. If you have a better framework, state its axioms and falsifiers, and let it face others in open combat.

If this interests you, I'd be happy to collaborate on defining the rules for disqualifying a framework outright (I have some criteria ready to be debated).

Comments

There's a psychological phenomenon in which people confuse access to information with knowledge they actually possess. So they will say "I know how to do X" even if they can't do X, as long as they know where to find information on how to do X (a book, Google, personal notes, etc.). In the same way, people probably confuse the abilities of AI with their own knowledge and their own skills.

I have to disagree that access is trivial today. I can find nothing much of value on the internet.

Think about it: if you have a new theory, it's not already common knowledge. But all you can find is common knowledge, and anything that goes against common knowledge is censored or pruned, which is why finding it has become almost impossible. Furthermore, LLMs are only competent at common tasks, so the further you get in a field, the less useful LLMs become. All the best information is necessarily rare, and both search engines and LLMs are built to filter out the rare.

Your post did give me something important to think about, though! I thought that we were getting more systematic and materialistic in the Western world, categorizing people and misinterpreting labels as concrete pieces of reality, because of our scientism. It did not occur to me that it could be a natural consequence of people being bombarded by information. Still, people seem to think in different ways in Asia; are they really consuming less information than we are?

Lastly, I take multidisciplinary theories to be a natural outcome of high intelligence; I don't think they can occur naturally very often, since most people simply cannot see abstract relationships across disciplines. Are you not calling yourself a "crank" simply to beat other people to it? Because you've been called crazy enough times to doubt yourself? Because, like I said, other people fail to understand you. The only situation I can think of where stupid people connect seemingly unrelated things is schizophrenia, and the theories of schizophrenics are usually pretty poor.

Do you know the book "The Master and His Emissary"? According to its author, a cultural shift in the West has caused us to value the left hemisphere's processing of the world at the cost of the right's (holistic, contextual, connected to lived reality). I fully agree with this observation, but I'm not sure which explanation is correct. I haven't read the book, but like me, the author probably calls the effect cultural because it doesn't seem to occur in Asia. Interestingly enough, schizophrenia is consistent with left-hemisphere-dominant thinking. I personally think that the increase in autism diagnoses might be related as well.

I for one welcome solipsism. I'm tired of "the consensus" eating everything, of every intellectual community asking me for a "source" the second I come up with any original ideas, and of whatever I say being dismissed unless I can find an authority who came to the same conclusion. But I also predict that the effect you're afraid of will never occur; we will experience the exact opposite. Everything tends towards homogeneity (Nietzsche is the first I've seen to notice it), there's no generative power of uniqueness anymore, LLMs literally lack the ability to generate uniqueness, and society

I think you find great enjoyment in thinking, but I have done enough of it to realize that it's similar to day-dreaming. It's not useful, it's not healthy. Even if you came up with a workable ToE, it wouldn't benefit the world since the world is already too 'legible'.