Tagline: Honestly, I’m just a crank theorist. My ideas are not to be consumed but critiqued. I’m not your guru.
The Phenomenon
Something strange is happening online: the number of people declaring “my framework” or “my theory” has exploded. This isn’t just a vibe. Google Trends shows that searches for “my framework” and “my theory” were flat for years, only to surge by several hundred percent starting in mid-2024. Crucially, searches for “framework” or “theory” without the personal qualifier show no such spike. The growth is in people creating theories, not consuming them.
The timing is suspiciously precise: it lines up with mass adoption of high-capability LLMs. Correlation isn’t causation, but the coincidence is hard to dismiss. If skeptics want to deny an AI connection, the challenge is to explain what else could drive such a sudden, specific change.
The Mechanism
Why would AI trigger a flood of personal theorizing? The answer lies in shifting cognitive bottlenecks.
Before AI, the hard part was finding information. Research meant digging through books, databases, or niche forums. Today, access is trivial. LLMs collapse the cost of retrieval. The new bottleneck is processing: too much information, too quickly, across too many domains.
Human working memory hasn’t changed. Overload pushes the brain to compress complexity by forming schemas. In plain terms: when faced with chaos, we instinctively build frameworks. This is not a lifestyle choice or cultural fad. It’s a neurological efficiency reflex. AI simply raises the pressure until the reflex fires everywhere at once.
The Output
The result is not just more theories, but more comprehensive theories. Narrow, domain-specific explanations break down under cross-domain overload. Faced with physics, psychology, and politics all colliding, the brain reaches for maximally reductive explanations — “one framework to rule them all.”
LLMs supercharge this. They take vague hunches and return them wrapped in the rhetoric of a polished dissertation. That creates a feedback loop: intuition → AI refinement → stronger psychological investment → more theorizing. Hence the Cambrian explosion of amateur theories of everything (ToEs).
The Crisis
Our validation systems can’t keep up. Peer review moves in years. AI-assisted framework building moves in hours. That mismatch means traditional filters collapse.
The effect looks like a bubble. The intellectual marketplace floods with elaborate, coherent-sounding theories, but most lack predictive power. The signal-to-noise ratio crashes. Without new filters, we risk epistemic solipsism: every thinker locked in a private universe, no common ground left.
The Proposal
Instead of hand-waving this away, we should organize it. Treat the proliferation of frameworks as raw material for a new kind of intellectual tournament.
Step one is standardized documentation. Any serious framework should state its axioms, its scope, and its falsification criteria. No vagueness allowed.
Step two is cross-framework testing. Theories shouldn’t be allowed to stay safe inside their own silo. A physics-first framework must say something about mind. A consciousness-first framework must say something about neuroscience. Only under cross-domain stress do weaknesses appear.
Step three is empirical survival. Theories that make it through cross-testing must generate novel, testable predictions. Elegance and persuasiveness are irrelevant; predictive success is the only arbiter.
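To make step one concrete, here is a minimal sketch of what a standardized entry might look like, written in Python purely for illustration. The class and field names are my own invention, not a settled standard, and the admissibility check simply encodes the "no vagueness allowed" rule: a framework that states no falsifiers never enters the tournament.

```python
from dataclasses import dataclass, field

@dataclass
class FrameworkEntry:
    """One framework's standardized documentation (hypothetical template)."""
    name: str
    axioms: list[str]        # claims taken as given, stated explicitly
    scope: str               # which domains the framework claims to explain
    falsifiers: list[str]    # observations that would disqualify it
    predictions: list[str] = field(default_factory=list)  # novel, testable predictions (step three)

    def is_admissible(self) -> bool:
        # A framework with no stated axioms or no falsification criteria
        # is vague by definition and is excluded from cross-framework testing.
        return bool(self.axioms) and bool(self.falsifiers)


# Example usage: a toy entry, not a real framework.
entry = FrameworkEntry(
    name="Example-First Framework",
    axioms=["Axiom 1 stated plainly", "Axiom 2 stated plainly"],
    scope="physics and mind",
    falsifiers=["A concrete observation that would refute it"],
    predictions=["A measurement it expects before anyone has looked"],
)
print(entry.is_admissible())
```

The point of forcing everything into one shape is comparability: cross-framework testing (step two) only works if every entry exposes the same fields to attack.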
The Invitation
This essay is itself a framework, and so must submit to the same rules. If you think my analysis is wrong, bring a stronger account of the data. If you have a better framework, state its axioms and falsifiers, and let it face others in open combat.
If this interests you, I'd be happy to collaborate on defining the rules for directly disqualifying a framework (I have some criteria ready for debate).

Notes -
??? Before the internet, maybe.
It won't. The average human is spiritually, cognitively, and creatively empty.
The vast majority of people still haven't produced an original artistic work, even with the availability of AI art tools. They have no motivation or desire to do so. So it goes, too, for the construction of "frameworks".
Finding high-quality, reliable information online was and remains difficult. That's one reason this place exists: to stress-test the information people find online. As an academic, I found literature reviews to be pretty grueling tasks even with Google Scholar and sci-hub at my disposal (something LLMs are actually getting pretty damn good at). Even finding the sort of weird and mixed-quality information that births crank theories had become quite a bit harder over the last decade, as search engines decided to promote "authoritative content" over giving the user what they're asking for, for good or ill.
My gripe with crank theories of everything is basically similar to my gripe with a lot of rationalism, which has largely escaped crank status: there is an immense tradition of theory out there, and if you don't put in the years or decades required to study it, you will at best be making new mistakes, but more likely making ones decades or centuries old. Plenty of great philosophers have said "everybody before me was wrong"; none of them arrived at that conclusion without exhaustive study of the tradition (yes, even Wittgenstein).