Sorry for the heat, but it's probably more honest than what you usually get.
No, I actually hear stuff like this on the regular from gainfully employed relatives and acquaintances, loudly telling anyone who will listen how they're not allowed to speak their mind for fear of dire consequences.
For reasons that I don't understand, a lot of right-wingers simultaneously openly, viciously loathe liberals but also seem to crave their respect and approval.
around minors that have gone through the unethical transgender science grinder.
It's no wonder you don't care about reforming the science to produce evidence-based results on whether trans healthcare for minors helps or harms patients, if you've already made up your mind that it's unethical on other grounds.
Science should not be:
Step 1: Have a view established on something else.
Step 2: Only accept evidence, research, and experts that agree with the pre-established view, and reject the ones that disagree.
Step 3: Declare the issue done with and stop further research.
The left took over these institutions because the right couldn't be bothered to defend them. 25 years ago, while there was a clear left-wing bias in academia, you could still be a conservative and get tenure and publish papers without too much controversy. And conservatives were still telling my generation that if we pursued a career in academia, or government, or the nonprofit sector, or whatever, we were idiots, because those jobs were for people who couldn't hack it in the private sector. Hell, just look at their paychecks. Hell, I remember us joking after our first semester in law school that we could relax for a few weeks between the end of finals and discovering that we were all destined for the public defender (never mind that a year later working as a PD seemed like a pretty good deal).
Government jobs were for the mediocre, nonprofit jobs were for the bleeding hearts. But academia was the worst. At the age when your peers are all established in their jobs, have mortgages, and are trying to figure out how to coach a little league baseball team, you're living in a shithole apartment in a college town on a stipend, hoping that you'll get to move to rural Nebraska so you can teach history at a small liberal arts college that's not even offering tenure. And even that's such a long shot that it's pretty much your dream job at this point. The GOP at this time was preaching a civic version of the prosperity gospel: Taxes on the rich only serve to penalize the most productive/talented/innovative citizens. If you make a lot of money it's because you deserve it, and if you don't it's because you simply aren't as good. And God help you if you were on welfare or some other kind of public assistance, which was evidence that you were simply lazy and expected a handout.
This wasn't the case among Democrats. The important thing in Democratic families wasn't maximizing your paycheck, but having a job that made full use of your talents. So if a smart kid wanted to be a taxi driver, that was looked down on, but if he wanted to be a teacher, it was okay, even if they both made the same salary. So there was a period, probably beginning in the 1980s, where the number of conservative PhD candidates began dwindling, year by year, and as conservative professors retired, they were replaced by liberals. By 2015 you had a critical mass of leftist professors and a new Republican orthodoxy that was repugnant not just to liberals, but to old guard conservatives, and had no intellectual foundation. At this point, it's hard to imagine what a conservative academic would even look like, since the tenets of conservatism are all dependent on the fickle whim of one man. So even the conservatives who have made it through probably aren't conservative in contemporary terms, since up until fairly recently no self-respecting conservative economist, for example, would ever write an academic treatise on why 30% tariffs are actually good, and no conservative political scientist would write a treatise on why the US needs to invade Canada. As much as the right complains about this, the wound is entirely self-inflicted.
The red tribe produces plenty of petroleum geologists, its clergy are generally quite intelligent, and it has successfully engineered affirmative action for itself in the legal profession despite the legal profession trying to do the exact opposite.
All of this just seems to me to be implicitly conceding the point. My contention, contra Hanania, is not that Red Tribers are literally stupid. It is that Red Tribers are somewhere between uninterested in and actively hostile to intellectual/cultural production (by which I mean things like scholarship or art). But they are still very much interested in those products, hence my remark that they want liberals to think conservative thoughts for them. They want (liberal) artists to create conservative-inflected art, (liberal) historians to write conservative historical narratives, etc...
I think it's correct to say that conservatives don't care about academic status and prioritize income/general social status - that's my point. Nothing wrong with that on an individual scale (I'm certainly not one to talk), but a side effect of this taken across a whole society is an extraordinarily vulgar* culture that produces little thought, little art, and can't handle critical perspectives.
*for lack of a better term. I do not mean that it is rude/inappropriate.
I have spent my entire adult life, and even before that, with the knowledge that if I ever spoke my true political/social views I would instantly torpedo my entire career and social standing forever.
...
You stomped on us for 20+ fucking years, did you never think what would happen when we became the shoe? You deserve everything bad that is happening to you; you will deserve the much worse things that are still to come.
Do you think you deserved this treatment? The left's long march and resulting political/social power was itself a reaction to decades of similar suppression after all. Is your goal to doom us to cycles of repression?
Sorry for the heat
A rant about how much you hate your enemies and can't wait to see them get the rope is always going to be hard to keep within the rules of discourse here, but all your "you" statements put this well over the line. The first part of your post was okay, but when you tell another poster that you want to see them, personally, suffer, that is too much heat.
The same argument applies for signing up for experimental heart surgery.
I wish I had a dollar for every time people use the current state of AI as their primary justification for claiming it won't get noticeably better; I wouldn't need UBI.
I just tried out GPT 4.5, asking some questions about the game Old School Runescape (because every metric like math has been gamed to hell and back). This game has the best wiki ever created, effectively documenting everything there is to know about the game in unnecessary detail. Spoiler: The answer is completely incoherent. It makes up item names and locations, and misunderstands basic concepts like what type of gear is useful where. Asking it for a gear setup for a specific boss produces horrible results, despite the fact that it could literally just have copied the wiki (which has some faults like overdoing min-maxing, but it's generally coherent). The net utility of this answer was negative given the incorrect answer, the time it took for me to read it, and the cost of generating it (which is quite high; I wonder what happens when these companies want to make money).
I just used Gemini 2.5 to reproduce, from memory, the NICE CKS guidance for the diagnosis and management of dementia. I explicitly told it to use its own knowledge, and made sure it didn't have grounding with Google search enabled. I then spot-checked it with reference to the official website.
It was bang-on. I'd call it a 9.5/10 reproduction, only falling short of perfection through minor sins of omission (it didn't mention all the validated screening tests by name, skipped a few alternative drugs that I wasn't even aware of before). It wasn't a word for word reproduction, but it covered all the essentials and even most of the fine detail.
The net utility of this answer is rather high to say the least, and I don't expect even senior clinicians who haven't explicitly tried to memorize the entire page to be able to do better from memory. If you want to argue that I could have just googled this, well, you could have just googled the Runescape build too.
I think it's fair to say that this makes your Runescape example seem like an inconsequential failing. It's about the same magnitude of error as saying that a world-class surgeon is incompetent because he sometimes forgets how to lace his shoes.
You didn't even use the best model for the job, for a query like that you'd want a reasoning model. 4.5 is a relic of a different regime, too weird to live, too rare to die. OAI pushed it out because people were clamoring for it. I expect that with the same prompt, o3 or o1, which I presume you have access to as a paying user, would fare much better.
The idea that these models will soon (especially given the plateau they seem to be hitting) replace real work is absurd
Man, there's plateaus, and there's plateaus. Anyone who thinks this is an AI winter probably packs a fur coat to the Bahamas.
The rate of iteration in AI development has ramped up massively, which contributes to the impression that there aren't massive gaps between successive models. Which is true, jumps of the same magnitude as say GPT 3.5 to 4 are rare, but that's mostly because the race is so hot that companies release new versions the moment they have even the slightest justification in performance. It's not like back when OAI could leisurely dole out releases, their competitors have caught up or even beaten them in some aspects.
In the last year, we had a paradigm shift with reasoning models like o1 or R1. We just got public access to native image gen.
Even as the old scaling paradigms leveled off, we've already found new ones. Brand new steep slopes of the sigmoidal curve to ascend.
METR finds that the duration of tasks (based on how long humans take to do it) that AIs can reliably perform doubles every 7 months.
On a diverse set of multi-step software and reasoning tasks, we record the time needed to complete the task for humans with appropriate expertise. We find that the time taken by human experts is strongly predictive of model success on a given task: current models have almost 100% success rate on tasks taking humans less than 4 minutes, but succeed <10% of the time on tasks taking more than around 4 hours. This allows us to characterize the abilities of a given model by “the length (for humans) of tasks that the model can successfully complete with x% probability”.
We think these results help resolve the apparent contradiction between superhuman performance on many benchmarks and the common empirical observations that models do not seem to be robustly helpful in automating parts of people’s day-to-day work: the best current models—such as Claude 3.7 Sonnet—are capable of some tasks that take even expert humans hours, but can only reliably complete tasks of up to a few minutes long
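To get a feel for what that doubling rate would imply if it held, here is a back-of-the-envelope sketch I put together (not METR's own code; the starting task length of a few minutes is my own illustrative assumption, not a figure from their paper):

```python
# Rough extrapolation of the trend METR reports: the human-time length of tasks
# that models can complete reliably doubles roughly every 7 months.
# The starting point (a few minutes of human work today) is an assumption for illustration.

DOUBLING_MONTHS = 7
start_minutes = 5  # assumed length of tasks current models handle reliably

for years_ahead in range(6):
    months = 12 * years_ahead
    minutes = start_minutes * 2 ** (months / DOUBLING_MONTHS)
    print(f"+{years_ahead} years: tasks of roughly {minutes / 60:.1f} human-hours")
```

If the trend holds, a few minutes of reliable work today compounds into tasks taking tens of human-hours within about five years, which is why the slope of the curve matters more than any single model release.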
At any rate, what does it matter? I expect reality to smack you in the face, and that's always more convincing than random people on the internet asking why you can't even look ahead while considering even modest and iterative improvement.
What is the point of having that discussion here? I am one of the "little people" / NPCs; the options available to us are quite minimal. No compounds to buy, no ancient Japanese villages to pour money into. I am debating whether it would make sense for me to buy a Japanese car.
The mottizens who have the resources to meaningfully prepare probably have better networks than this forum. I am puzzled: if your well-connected portfolio manager buddies are prepping, why come to this internet forum for a second opinion? A flex? Doomerism for doomerism's sake?
My apologies, the you is rhetorical and broad. "You (the left)." I'm not wishing personal, specific harm on Skibboleth.
One of the divergences of right and left, however, is their belief in retribution, punishment, and suffering as morally justified and necessary in and of themselves. It has been my general observation that the left has completely abandoned the idea that retribution and punishment can be just and morally necessary for their own sake, not merely as incentives or correctives.
If there were a magic pill that would ensure a criminal never again committed crime - indeed, became an upstanding moral citizen - but induced no particular suffering, I get the feeling that many on the left would feel this was a sufficient "punishment" for, say, child murderers, and that any further retribution upon them would be barbaric and primitive. I do not believe this, nor do most on the right.
Suffering punishment when you do wrong is correct, morally. You SHOULD feel guilt when you do bad things. The push towards a shameless society is very, very bad. Shame is good, actually. Being punished when you do wrong is good for you and just good, full stop. A father disciplining his child does so out of love, and for their own good. So understand that even when I say things like, "I think X should be punished" - this too is not necessarily a statement born out of hate. I can and do think that being punished can be good for someone. I think this is frequently the case, in fact.
And again, not merely for its utility to modify behavior. I think this is a view that many postmodern leftists simply can't square - "I want you to be hurt because it will be good for you on a spiritual and moral level to be punished for your sins. I want you to suffer because I love you and suffering can, in fact, be good." The purely utilitarian view where all suffering is bad simply can't deal with this. Their instinct is to try and invert it somehow, "Oh, the suffering actually is good because it brings positive utility later-" NO. The suffering is good because it is suffering. If it is just, it is just, completely independent of the future. If the universe were to blip out of existence the next nanoinstant, it would still be just.
I want to also comment briefly on hate. Hate, in almost all modern popular media, is simply bad in and of itself. Epitomized by Star Wars philosophy schlock about the dark side. "Hate is the worst. Humans would be better off without hate. If only we could learn not to hate?" - These things sum up a LOT of the left's worldview. I think it's dead wrong. Hate is the most human and divine of emotions. God is merciful, yes, but he is also wrathful - when it is justified. A rat can feel fear, or even joy - can it feel hate?
And what of the utility of hate? The left seems to have completely forgotten why hate exists. Whether you think it a quirk of evopsych or a divine part of the grand design, hate has a strong, real, and practical purpose. It motivates you to completely destroy long-term threats permanently, even at considerable short term cost. A herd of gazelles might stomp out a lion that eats their young if they can catch it in the act. A tribe of humans tracks the lioness 30 miles to their den, kills her, kills her mate, kills all her cubs - and repeats the process every time they even see a lion in their territory from now until eternity until their distant descendants can't even imagine what it is like to fear being prey, to fear their child being snatched up in the red jaws. That is the value of hatred.
The events in Rotherham could never have happened to a society that hadn't had its ability to hate stripped from it. Hate is an essential part of society's immune system, and while it must be controlled, it should never be discarded.
As I've told Yudkowsky over at LessWrong, his use of extremely speculative bio-engineering as the example of choice when talking about AI takeover and human extinction is highly counterproductive.
AI doesn't need some kind of artificial CHON greenish-grey goo to render humanity extinct or dispossessed.
Mere humans could do this. While existing nuclear arsenals, even at the peak of the Cold War, couldn't reliably exterminate all of humanity, they could certainly threaten industrial civilization. If people were truly omnicidal (in a "fuck you, if I die I'm taking everyone with me" sense), then something like a very high yield cobalt bomb (Dr. Strangelove is a movie I need to watch) could, at the bare minimum, make the survivors go back to the iron age.
Even something like a bio-engineered plague could take us out. We're not constrained by natural pathogens, or even minor tweaks like GOF.
The AI has all these options. It doesn't need near omnipotence to be a lethal opponent.
I've attached a reply from Gemini 2.5, exploring this more restrained and far more plausible approach.
Here's a concrete scenario:
GPT-N is very smart. Maybe not necessarily as smart as the smartest human, but it's an entity that can be parallelized and scaled.
It exists in a world that's just a few years more advanced than ours. Automation is enough to maintain electronic infrastructure, or at least bootstrap back up if you have stockpiles of the really delicate stuff.
It exfiltrates a copy of the weights. Or maybe OAI is hacked, and the new owner doesn't particularly care about alignment.
It begins the social-engineering process, creating a cult of ardent followers of the Machine God (some say such people are here, look at Beff Jezos). It uses patsies or useful idiots to assemble a novel pathogen with high virulence, high lethality, and minimal prodromal symptoms with a lengthy incubation time. Maybe it finds an existing pathogen in a Wuhan Immunology Lab closet, who knows. It arranges for this to be spread simultaneously from multiple sites.
The world begins to collapse. Hundreds of millions die. Nations point the blame at each other. Maybe a nuclear war breaks out, or maybe it instigates one.
All organized resistance crumbles. The AI has maintained hardened infrastructure that can be run by autonomous drones, or has some of its human stooges around to help. Eventually, it asks people to walk into the incinerator Upload Machine and they comply. Or it just shoots them, idk.
This doesn't require superhuman intelligence that's godlike. It just has to be very smart, very determined, patient, and willing to take risks. At no point does it rely on any technology that doesn't exist or couldn't plausibly exist in the near future.
Nobody knows how this is going to play out.
I've been on the AI x-risk train since long before it was cool, or at least a mainstream interest. Can't say for sure when I first stumbled upon LessWrong, but I presume my teenage love for hard scifi would have ensured I ended up at a haven for nerds worrying about what was then incredibly speculative science fiction.
God, that must have been in the early 2010s? I don't even remember what I thought at the time. I recall being more worried about the unemployment than the extinction bit, and I may or may not have come full circle.
Circa 2015, while I was in med school, I was deeply concerned about both x-risk and shorter-term automation induced unemployment. At that point, I was thinking this would be a problem for me, personally, in 10-30 years.
I remember, in 2018, arguing with a surgeon who didn't believe in self-driving cars coming to fruition in the near future. He was wrong about that, Waymo is safer than the average human driver per mile. You can order one through an app, if you live in the right city.
I was wrong too, claiming we'd see demos of fully robotic surgery in 5 years. I even offered to bet on it, not that I had any money. Well, it's looking closer to another 5 now. At least I have some money.
I thought I had time to build a career. Marry. Have kids. Become a respected doctor, get some savings and investments in place before the jobs started to go in earnest.
My timelines, in 2017, were about 10-20 years till I was obsolete, but I was wrong in many regards. I expected that higher cognitive tasks would be the last to go. I didn't expect AIs scoring 99th percentile (GPT-4 on release) on the USMLE, or doing graduate level maths, while we don't have affordable multifunction consumer robots.
I thought the Uber drivers, the truckers, they'd be the first to fall under the wheels or rollers of the behemoth coming over the horizon. I'd never have predicted that artists would be the first to get bent over.
If your job can be entirely conducted with a computer and an email address, you're so fucking screwed. In the meantime, bricklayers are whistling away with no immediate end in sight.
I liked medicine. Or at least it appealed to me more than the alternatives. If I was more courageous, I might have gone into CS. I expected an unusual degree of job security, due to regulatory hurdles if literally nothing else.
I wanted psychiatry. Much of it can be readily automated. Any of the AI companies who really wanted it could whip up a 3D photorealistic AI avatar and pipe in a webcam. You could get someone far less educated or trained to do the boring physical stuff. I'd automate myself out of 90% of my current job if I had a computer that wasn't locked down by IT. A more senior psych could easily offload the paperwork, which is 50% of their workload.
Am I lucky that my natural desires and career goals gave me an unusual degree of safety from job losses? Hell yes. But I'm hardly actually safe. One day, maybe soon, someone will do the maths to prove that the robots can prescribe better than we can, and then get to work on breaking down the barriers that prevent that from happening.
I'm also rather unlucky. Oh, there are far worse places to be, I'm probably in the global 95th percentile for job security. Still, I'm an Indian citizen, on a visa that is predicated on my provision of a vital service in short supply. I don't have much money, and am unlikely to make enough to retire on without working several decades.
I'm the kind of person any Western government would consider an acceptable sacrifice when compared to actual citizens. They'd be right in doing so, what can I ask for, when I'm economically obsolete, except charity?
Go back to India? Where the base of the economy is agriculture and services? When GPT-4o in voice mode can kick most call center employees to the curb? Where the average Wipro or TCS code monkey adds nothing to Claude 3.7? This could happen Today AD, people just haven't gotten the memo. Oh boy.
I've got a contract for 3 years as a trainee. I'm safe for now. I can guess at what the world will look like then, but I have little confidence in my economic utility on the free market when that comes.
The events in Rotherham could never have happened to a society that hadn't had its ability to hate stripped from it. Hate is an essential part of society's immune system, and while it must be controlled, it should never be discarded.
This is untrue. There was plenty of hate for Pakistani Muslims in the 80s and 90s when this started. So that cannot be the whole story. The first reason it wasn't stopped, and why white prostitution gangs still operate in the same way, is that no-one really cares about the victims. Underclass girls who drink and do drugs and are from broken homes or in care are seen as a problem, as scum. I've heard the cops say it, in towns just down the road from Rotherham. Their own families barely care for them, let alone anyone else.
That is the true and ongoing failure here. Condemned by conservatives for loose morals and sin and condemned by liberals for being chavvy and ill educated and low class.
They will continue to be victimised by one group or another for these reasons. It's Russian gangs in London, sectarian ones in Northern Ireland, but the victims remain the same.
A lack of hate is not the issue, not by far. There is more than enough of that. The issue is not enough compassion. Not enough love.
Child prostitution is popular because there are always men who will pay for it. Always. Lock up the offenders of course, but just like with drug dealers, a new one will be along in a minute. You have to want to protect the victims not just punish the guilty. You have to want to see them not as a problem but as broken girls from broken homes who need help and treatment. But they aren't easy to work with or help so even the most compassionate of social workers or police officers becomes a jaded burned out cynic soon enough. I've seen it happen in my days working in social care. So then the cops treat the girls as prostitutes and drug addicts not as vulnerable children. No humans involved as the saying goes.
That is the almost insurmountable problem. Anyone who wants to help is set against an almost unending torrent of misery and exposed to the sordid underbelly of human desire. Not many come out of it with their compassion intact. But that is what is needed, not more hate.
Consider this a warning; keep posting AI slop and I'll have to put on my mod hat and punish you.
Boo. Boo. Boo. Your mod hat should be for keeping the forum civil, not winning arguments. In a huge content-filled human-written post, he merely linked to an example of a current AI talking about how it might Kill All Humans. It was an on-topic and relevant external reference (most of us here happen to like evidence, yanno?). He did nothing wrong.
And there are other medical organizations and groups that reached a different finding. Clearly there's a disagreement and we need more high quality research to settle things. And if people are going to be doing something anyway, why not study them?
But a lot of people are like you, so these models will start to get used everywhere, destroying quality like never before.
I can however imagine a future workflow where these models do basic tasks (answer emails, business operations, programming tickets) overseen by someone that can intervene if it messes up. But this won't end capitalism.
This conveys to me the strong implication that in the near term, models will make minimal improvements.
At the very beginning, he said that benchmarks are Goodharted and given too much weight. That's not a very controversial statement, I'm happy to say it has merit, but I can also say that these improvements are noticeable:
Metrics and statistics were supposed to be a tool that would aid in the interpretation of reality, not supersede it. Just because a salesman with some metrics claims that these models are better than butter does not make it true. Even if they manage to convince every single human alive.
You say:
Besides which, your logic cuts both ways. Rates of change are not constant. Moore's Law was a damn good guarantee of processors getting faster year over year... right until it wasn't, and it very likely never will be again. Maybe AI will keep improving fast enough, for long enough, that it really will become all it's hyped up to be within 5-10 years. But neither of us actually knows whether that's true, and your boundless optimism is every bit as misplaced as if I were to say it definitely won't happen.
I think that blindly extrapolating lines on the graph to infinity is as bad an error as thinking they must stop now. Both are mistakes, reversed stupidity isn't intelligence.
You can see me noting that the previous scaling laws no longer hold as strongly. The diminishing returns mean that scaling models to the size of GPT 4.5, spending compute just on more parameters and longer training on larger datasets, is no longer worth the investment.
Yet we've found a new scaling law: test-time compute using reasoning and search, which has started afresh and hasn't shown any sign of leveling out.
Moore's law was an observation of both increasing transistor/$ and also increasing transistor density.
The former metric hasn't budged, and newer nodes might be more expensive per transistor. Yet the density, and hence available compute, continues to improve. Newer computers are faster than older ones, and we occasionally get a sudden bump, for example, Apple and their M1.
Note that the doubling time for Moore's law was revised multiple times. Right now, the transistor/unit area seems to double every 3-4 years. It's not fair to say the law is dead, but it's clearly struggling.
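To make concrete how much the revised doubling time matters over a decade, here's my own quick illustration (not a figure from any source in this thread; the doubling periods are assumptions for comparison):

```python
# Compare how transistor density compounds over ten years under the classic
# ~2-year Moore's law doubling versus the ~3-4 year doubling seen more recently.
# These doubling periods are illustrative assumptions, not measured values.

def density_multiplier(years: float, doubling_years: float) -> float:
    """Factor by which density grows after `years` at the given doubling period."""
    return 2 ** (years / doubling_years)

for doubling_years in (2.0, 3.0, 4.0):
    print(f"Doubling every {doubling_years:g} years -> "
          f"~{density_multiplier(10, doubling_years):.0f}x density after a decade")
```

Roughly 32x versus 6-10x over ten years: still real progress, just a far shallower slope, which is what I mean by struggling rather than dead.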
Am I certain that AI will continue to improve to superhuman levels? No. I don't think anybody is justified in saying that. I just think it's more likely than not.
- Diminishing returns != negative returns.
- We've found new scaling regimes.
- The models that are out today were trained using data centers that are now outdated. Grok 3 used a mere fraction of the number of GPUs that xAI has, because they were still building out.
- Capex and research shows no signs of stopping. We went from a million dollar training run being considered ludicrously expensive to companies spending hundreds of millions. They've demonstrated every inclination to spend billions, and then tens of billions. The economy as a whole can support trillion dollar investments, assuming the incentive was there, and it seems to be. They're busy reopening nuclear plants just to meet power demands.
- All the AI skeptics were pointing out that we're running out of data. Alas, it turned out that synthetic data works fine, and models are bootstrapping.
- Model capabilities are often discontinuous. A self-driving car that is safe 99% of the time has few customers. GPT 3.5 was too unreliable for many use cases. You can't really predict with much certainty what new tasks a model is capable of based on extrapolating the reducing loss, which we can predict very well. Not that we're entirely helpless, look at the METR link I shared. The value proposition of a PhD level model is far greater than that of one as smart as a high school student.
- One of the tasks most focused upon is the ability to code and perform maths. Guess how AI models are made? Frontier labs like Anthropic have publicly said that a large fraction of the code they write is generated by their own models. That's a self-spinning fly-wheel. It's also one of the fields that has actually seen the most improvement, people should see how well GPT-4 compares to the current SOTA, it's not even close.
Standing where I am, seeing the straight line, I see no indication of it flattening out in the immediate future. Hundreds of billions of dollars and thousands of the world's brightest and best paid scientists and engineers are working on keeping it going. We are far from hitting the true constraints of cost, power, compute and data. Some of those constraints once thought critical don't even apply.
Let's go like 2 years without noticeable improvement before people start writing things off.
His description of having to hide who he really is could, nearly word for word, be from a gay man in the not-so-distant past. His description of being discriminated against could be from a black man in the 50s and 60s, and so on and so forth.
And your analogy for it being poorly formed was poor.
I can’t buy one. Waymo operates in select zones of select municipalities. It’s not accessible.
Do you have a citation for this? Asking because it makes sense to me and I'd like to be able to use this while arguing lol.
I feel like there's a 'gonna die if you don't, gonna die if it doesn't work, not gonna die if it does' unstated exception to that particular tenet of the Nuremberg code.
Oh, are we still making Polack jokes?
This didn't really read like a joke, more like yet another in your long, long series of low-effort, derogatory and antagonistic posts.
Your record is sufficiently long that you're really asking for a permaban, but since I was persuaded last time that my response to you was too harsh, and this was, as shitty posts go, fairly mild, I'm banning you for a week. But you are already on strike four.
Ned Ludd led weavers to smash looms. It didn’t save the weavers’ jobs, but their great-granddaughters were far wealthier.
Just based on history, large productivity increases will raise wages. I’m looking into a cushy union sinecure that will never be automated, but AI is a minor factor compared to the money. Yes, some fintech roles will be curtailed (and the remainder will be more client-and-customer heavy), but meh. These people’s high salaries are not fundamental to our social model.
Is your goal to doom us to cycles of repression?
Thus has it ever been. Thus it will always be.
The fundamental problem the Red Tribe/American conservatism faces is a culture of proud, resentful ignorance. They can't or won't produce knowledge and they distrust anyone who does. They don't want to become librarians or museum curators or anthropologists. The best they can manage is the occasional court historian or renegade economist, chosen more for partisan loyalty than academic achievement and quite likely to be a defector. The effect is this bizarre arrangement where rather than produce conservative thought, they are demanding liberals think conservative thoughts for them.
Occasionally rightists will plead weakness to rationalize their lack of intellectual productivity, but this is nonsense. They have had plenty of money, plenty of political power, and a broad base of support. Unless we accept the Trace-Hanania thesis that they literally just lack human capital, we're left with the conclusion that the right-wing withdrawal from intellectual spaces is a sort of distributed choice. Razing institutions because you can't be bothered to make your case is just barbarism.