
The Great Fragmentation: A Proposal for Organized Intellectual Combat in the Age of AI



Tagline: Honestly, I’m just a crank theorist. My ideas are not to be consumed but critiqued. I’m not your guru.


The Phenomenon

Something strange is happening online: the number of people declaring “my framework” or “my theory” has exploded. This isn’t just a vibe. Google Trends shows that searches for “my framework” and “my theory” were flat for years, only to surge by several hundred percent starting in mid-2024. Crucially, searches for “framework” or “theory” without the personal qualifier show no such spike. The growth is in people creating theories, not consuming them.
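
For readers who want to check the trend claim themselves, here is a minimal sketch using the unofficial pytrends wrapper for Google Trends (pip install pytrends). The search terms and timeframe are simply the ones discussed above; the exact percentages will vary depending on when you run it.

    # Pull relative search interest for the qualified and unqualified terms.
    from pytrends.request import TrendReq

    pytrends = TrendReq(hl="en-US")
    terms = ["my framework", "my theory", "framework", "theory"]
    pytrends.build_payload(terms, timeframe="2019-01-01 2025-06-01")
    df = pytrends.interest_over_time()

    # Eyeball the last two years: the qualified terms should jump
    # around mid-2024 while the bare terms stay roughly flat.
    print(df[terms].tail(24))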

The timing is suspiciously precise: it lines up with mass adoption of high-capability LLMs. Correlation isn’t causation, but the coincidence is hard to dismiss. If skeptics want to deny an AI connection, the challenge is to explain what else could drive such a sudden, specific change.


The Mechanism

Why would AI trigger a flood of personal theorizing? The answer lies in shifting cognitive bottlenecks.

Before AI, the hard part was finding information. Research meant digging through books, databases, or niche forums. Today, access is trivial. LLMs collapse the cost of retrieval. The new bottleneck is processing: too much information, too quickly, across too many domains.

Human working memory hasn’t changed. Overload pushes the brain to compress complexity by forming schemas. In plain terms: when faced with chaos, we instinctively build frameworks. This is not a lifestyle choice or cultural fad. It’s a neurological efficiency reflex. AI simply raises the pressure until the reflex fires everywhere at once.


The Output

The result is not just more theories, but more comprehensive theories. Narrow, domain-specific explanations break down under cross-domain overload. Faced with physics, psychology, and politics all colliding, the brain reaches for maximally reductive explanations — “one framework to rule them all.”

LLMs supercharge this. They take vague hunches and return them wrapped in the rhetoric of a polished dissertation. That creates a feedback loop: intuition → AI refinement → stronger psychological investment → more theorizing. Hence the Cambrian explosion of amateur theories of everything.


The Crisis

Our validation systems can’t keep up. Peer review moves in years. AI-assisted framework building moves in hours. That mismatch means traditional filters collapse.

The effect looks like a bubble. The intellectual marketplace floods with elaborate, coherent-sounding theories, but most lack predictive power. The signal-to-noise ratio crashes. Without new filters, we risk epistemic solipsism: every thinker locked in a private universe, no common ground left.


The Proposal

Instead of hand-waving this away, we should organize it. Treat the proliferation of frameworks as raw material for a new kind of intellectual tournament.

Step one is standardized documentation. Any serious framework should state its axioms, its scope, and its falsification criteria. No vagueness allowed.
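
As a purely illustrative sketch (the class and field names here are my own invention, not an established standard), step one could be as simple as refusing any submission that doesn't fill in three fields:

    from dataclasses import dataclass

    @dataclass
    class FrameworkSubmission:
        # Minimal standardized documentation for a proposed framework.
        name: str
        author: str
        axioms: list[str]       # foundational assumptions, stated explicitly
        scope: list[str]        # domains the framework claims to explain
        falsifiers: list[str]   # concrete observations that would refute it

        def is_admissible(self) -> bool:
            # No axioms, no declared scope, or no way to be proven wrong
            # means the framework is too vague to enter the tournament.
            return bool(self.axioms) and bool(self.scope) and bool(self.falsifiers)

Anything that can't fill in the falsifiers field is, by construction, not playing the game.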

Step two is cross-framework testing. Theories shouldn’t be allowed to stay safe inside their own silo. A physics-first framework must say something about mind. A consciousness-first framework must say something about neuroscience. Only under cross-domain stress do weaknesses appear.

Step three is empirical survival. Theories that make it through cross-testing must generate novel, testable predictions. Elegance and persuasiveness are irrelevant; predictive success is the only arbiter.
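
Continuing the same toy sketch (again, the names and scoring rules are hypothetical, only there to make the steps concrete), steps two and three could be tracked like this:

    from dataclasses import dataclass

    @dataclass
    class Prediction:
        # A single novel, testable prediction generated by a framework.
        claim: str
        domain: str                # may lie outside the framework's home silo
        outcome: str = "pending"   # "confirmed", "refuted", or "pending"

    def cross_domain_coverage(submission, foreign_domains):
        # Step two: how many domains outside its home turf does the
        # framework actually commit to saying something about?
        if not foreign_domains:
            return 0.0
        covered = [d for d in foreign_domains if d in submission.scope]
        return len(covered) / len(foreign_domains)

    def survival_score(predictions):
        # Step three: only resolved predictions count; elegance doesn't.
        resolved = [p for p in predictions if p.outcome != "pending"]
        if not resolved:
            return 0.0
        return sum(p.outcome == "confirmed" for p in resolved) / len(resolved)

The hard part in practice is adjudicating outcomes fairly, but even this crude bookkeeping forces a framework to commit to claims that can lose.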


The Invitation

This essay is itself a framework, and so must submit to the same rules. If you think my analysis is wrong, bring a stronger account of the data. If you have a better framework, state its axioms and falsifiers, and let it face others in open combat.

If this interests you, I'd be happy to collaborate on defining the rules for disqualifying a framework outright (I have some criteria ready to be debated).

The Discussion

I don’t think you need step 2. A theory that passes the gates of "makes novel, interesting, and falsifiable predictions" and "those predictions end up panning out" is already rare enough that the volume will remain manageable even with the deluge of AI slop. Most AI slop frameworks won't even pass the first of those two gates, and that's the easy one.

I disagree. You are right that it is the most efficient way to quickly filter out 99.999% of AI slop. But my goal isn't just to crown a winning #1 theory as fast as possible; my goal is to build a movement, or a community, out of these crank theorists (a label I apply to myself) and to collaborate on refining, stress testing, and ultimately strengthening one or more frameworks together, with the aim of eventually formulating a proposal to the scientific community.

I believe there is a genuine mass yearning for unification, and I hypothesize that real signal lies buried within the noise of this unprecedented surge. The goal is to build a collaborative, not just a competitive, ecosystem.