
Culture War Roundup for the week of January 15, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


What is administrative burden in research for?

I think about this in a variety of domains, but it came up again when one of my tech news aggregators pointed to this paper. The idea is to use LLMs to generate and evaluate protocols for biology experiments. I think the obvious key concern relates to well-known tradeoffs that people have brought up in other contexts. Sometimes it gets reduced to, "Well, people were concerned that with automated spell-checkers, people would forget how to spell, but that's a silly problem, because even if they forget how to spell, their output, augmented by the spell-checker, will be plenty productive."
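For concreteness, here's roughly how I picture the generate-and-critique loop working. To be clear, this is my own toy sketch, not the paper's actual pipeline; the model name, prompts, and function names are all placeholders:

```python
# Toy sketch of an LLM generate-then-critique loop for lab protocols.
# My illustration, not the paper's pipeline; model name and prompts are
# placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def draft_protocol(goal: str) -> str:
    """Ask the model to draft a step-by-step wet-lab protocol."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You write step-by-step biology protocols."},
            {"role": "user", "content": f"Draft a protocol to: {goal}"},
        ],
    )
    return resp.choices[0].message.content

def critique_protocol(protocol: str) -> str:
    """Second pass: flag missing controls, quantities, and safety steps."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You review protocols for missing controls and hazards."},
            {"role": "user", "content": protocol},
        ],
    )
    return resp.choices[0].message.content

print(critique_protocol(draft_protocol("measure GFP expression in E. coli")))
```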

I wonder if there are limits to this reasoning. I'm thinking of two topics that I recall Matt Levine writing about (I can't find links at the moment; since Money Stuff always has multiple topics in each letter and he's written about similar topics that use similar words a bunch of times, I can't quickly find them).

One topic I recall is him talking about 'defensive' board meetings. The way I recall it is to suppose that a company puts in their public disclosures that they "consider cybersecurity risks". This doesn't necessarily mean that they do anything about cybersecurity risks, but they have to consider them. The way this plays out is that the board has to put cybersecurity risks on the agenda for one of their meetings. For an hour or whatever, the board has to talk about the general topic of cybersecurity. This talking can be at a high level of generality, and they don't have to decide to do anything specific, so long as the official minutes say, in writing, that they "considered" it. Without this, they might be liable for securities fraud. With it, they still might be extremely vulnerable and eventually lose a bunch of money when they're exploited (since they just talked and didn't do anything), but at least when that happens, they won't also get hit with a shareholder suit for securities fraud. (Really, Matt Levine would say, they'll absolutely get hit with a shareholder suit for securities fraud, but they'll be able to point to the minutes to defend themselves.)

The second topic I recall is him talking about where the value lies in corporate contract negotiation. He said that most times, you just start from the "typical" contract. Maybe something you've used in the past. You just pull that old contract off the shelf, change some particulars, then put it forward as a starting point. Then, the negotiations are often about just little modifications, and the phrase, "That's standard," is a pretty solid weapon against any modifications. He then talked about how a firm that does these negotiations in bulk as a service can start to sneak new provisions in around the edges in some contracts, so that they can later point to those prior contracts and say, "That's standard." Having the ability to set the "default" can have value.

So, biology. Science. Writing protocols is complicated, annoying, and time-intensive. Scott has written before about how infuriating the IRB process can be. Even with just that, there were questions about what the IRB process is for, and whether the current level of scrutiny is too lax, too strict, or about right.

Applying LLMs will potentially greatly decrease the barrier for newer researchers (say, grad students) to generate piles of administrative-style paperwork, saying all the proper words about what is "supposed" to be done and checking off every box that the IRB or whatever would ask for. But I do have to wonder... will it lead to short-cutting? "Sure, the LLM told us that we needed to have these thirty pages of boilerplate text, so we submitted these thirty pages of boilerplate text, but I mean, who actually does all of that stuff?!" Will they even take the time to read the entirety of the document? I can't imagine they're going to pay as close attention as they might have if they had to painstakingly go through the process of figuring out what the requirements were and why they were necessary (or coming to the personal conclusion that it was a dumb requirement, necessary only for the sake of being necessary). At least if they went through the process, they had to think about it and consider what it was that they were planning to do. This could lead to even worse situations than a board "considering" cybersecurity; they don't even need meeting notes to demonstrate that they "considered" the details of the protocol appropriately; the protocol itself is the written document attesting that they theoretically took things into consideration in an assumed-to-be-serious way.

This could also entrench silly requirements. You need to provide the subjects with pencils instead of pens? "That's standard." Who is going to be able to do the yeoman's job of subtly shifting the default to something that's, I don't know, not stupid?

I imagine all sorts of dispositions by particular researchers. There are obviously current researchers who just don't give a damn about doing things the right way, even to the point of outright fraud. There are obviously current researchers who really do care about doing things the "right way", to the point of being so frustrated with how convoluted the "right way" can be that they just give up on the whole shebang (a la Scott). Which factors become more common? What becomes the prevalent way of doing things, and what are the likely widespread failure modes? Mostly, I worry that it could make things worse in both directions: needing large piles of paper to check off every box will lead both to short-cutting by inferior researchers, possibly producing even more shit-tier research (if that problem wasn't bad enough already; also, since they'll have the official documents, maybe it'll be in a form that is even harder to discover and criticize), and to warding off honest, intelligent would-be researchers like Scott.

I don't know. Lowering the barrier can obviously also have positive effects of helping new researchers just 'magically' get a protocol that actually does make sense, and they can get on with producing units of science when they otherwise would have been stuck with a shit-tier protocol... but will we have enough of that to overcome these other effects?

When writing formal letters in Japanese, there are a variety of extra steps you have to do above and beyond fancy salutations and signoffs, including - my favourite - the seasonal observations beginning the letter (e.g., in August you could say "The oppressive heat continues to linger") and closing it ("please give my regards to everyone"). These are so stereotyped that I think most recipients of letters regard them more as a structural element of the composition than a semantic one, just as in English we don't really think of the virtue of sincerity when reading "Yours Sincerely".

I think this is basically what LLMs will do to writing, at least on the 5-10 year time scale. Everything will be written by LLMs and interpreted and summarised by LLMs, and there will be a whole SEO-style set of best practices to ensure your messages get interpreted in the right way. This might even mean that sometimes, when we inspect the actual first-order content of compositions created by LLMs, there are elements we find bizarre or nonsensical, there for the AI readers rather than the human ones.

To get back to your point, I absolutely think this is going to happen to bureaucracy in academia and beyond, and I think it's a wonderful thing, a process to be cherished. Right now, the bureaucratic class in education, government, and elsewhere exert a strongly negative influence on productivity, and they have absolutely no incentives to trim down red tape to put themselves out of jobs or reduce the amount of power they hold. This bureaucratic class is at the heart of cost disease, and I'm not exaggerating when I say that their continued unchecked growth is a civilisation-level threat to us.

In this regard, LLMs are absolutely wonderful. They allow anyone with limited training to meet bureaucratic standards with minimal effort. Better still, they can bloviate at such length that the bureaucracy will be forced to rely on LLMs to decode them, as noted above, so they lose most of the advantage that comes with being able to speak bureaucratese better than honest productive citizens. "God created men, ChatGPT made them equal."

If you're worried that this will lead to lax academic standards or shoddy research practices, I'd reassure you that academic standards have never been laxer and shoddy research is absolutely everywhere, and the existence of review boards and similar apparatchik-filled bodies does nothing to curb these. If anything, by preventing basic research being done by anyone except those with insider connections and a taste for bureaucracy, they make the problem worse. Similarly, academia is decreasingly valuable for delivering basic research; the incentive structures have been too rotten for too long, and almost no-one produces content with actual value.

I'm actually quite excited about what LLMs mean in this regard. As we get closer to the point where LLMs can spontaneously generate 5000-10000 word pieces that make plodding but cogent arguments and engage meticulously with the existing literature, huge swathes of the academic journal industry will simply be unable to survive the epistemic anarchy of receiving vast numbers of such submissions, with no way to tell the AI-generated ones from the human ones. And in the softer social sciences, LLMs will make the harder bits - i.e., the statistics - much easier and more accessible. I imagine the vast majority of PhD theses that get completed in these fields in 2024 will make extensive use of ChatGPT.

All of these changes will force creative destruction on academia in ways that will be beautiful and painful to watch but will ultimately be constructive. This will force us to think afresh about what on earth Philosophy and History and Sociology departments are all for, and how we measure their success. We'll have to build new institutions that are designed to be ecologically compatible with LLMs and an endless sea of mediocre but passable content. Meanwhile I expect harder fields like biomed and material sciences to (continue to) be supercharged by the capabilities of ML, with the comparative ineffectiveness of institutional research being shown up by insights from DeepMind et al. We have so, so much to look forward to.

In this regard, LLMs are absolutely wonderful. They allow anyone with limited training to meet bureaucratic standards with minimal effort. Better still, they can bloviate at such length that the bureaucracy will be forced to rely on LLMs to decode them, as noted above, so they lose most of the advantage that comes with being able to speak bureaucratese better than honest productive citizens. "God created men, ChatGPT made them equal."

Hahaha.

Meanwhile, in reality, bureaucracy will rise with the amount that bureaucrats can process and still leave at 5 pm sharp. AI notably increases that amount.

If you're worried that this will lead to lax academic standards or shoddy research practices, I'd reassure you that academic standards have never been laxer and shoddy research is absolutely everywhere, and the existence of review boards and similar apparatchik-filled bodies does nothing to curb these.

I have a mild anecdotal counterpoint. It's old at this point, since I haven't worked in science in over a decade, but when I did, I was on my organization's institutional animal care and use committee, and despite the bureaucratic jargon and process, we actually did do something to curb some of the more pointless uses of research animals. The group wasn't particularly adversarial and worked with researchers on questions like whether the statistical power was going to be sufficient (we don't want to kill animals if the study won't even give a result), whether it could be done with fewer animals (same, but reversed), and whether the protocol used all reasonable practices to reduce pain and suffering of the animals (e.g. if the endpoint is death from an infection, can we just use confirmed infection as the endpoint instead, since the animal will tend to die painfully?).
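To make the statistical-power piece concrete, the kind of back-of-envelope check the committee would sanity-check takes only a few lines. A minimal sketch using statsmodels, where the effect size and thresholds are made-up numbers for illustration:

```python
# Back-of-envelope power calculation of the sort an animal-use committee
# might sanity-check: how many animals per group does the design need?
# Effect size and thresholds here are made up for illustration.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Hypothesized standardized effect (Cohen's d) between treatment and control.
effect_size = 0.8

# Animals per group needed for 80% power at alpha = 0.05, two-sided test.
n_per_group = analysis.solve_power(
    effect_size=effect_size, alpha=0.05, power=0.80, alternative="two-sided"
)
print(f"~{n_per_group:.0f} animals per group")  # ~26 for d = 0.8

# The reverse check: if the proposal only budgets 10 per group, what power
# does it actually have? (If it's low, the animals die for a null result.)
power = analysis.power(effect_size=effect_size, nobs1=10, alpha=0.05)
print(f"power with n=10 per group: {power:.2f}")
```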

I'm sure many groups feel that they're doing something constructive despite just being an annoying bureaucracy, and I'm sure that the review process we were doing was both imperfect and tedious, but I do want to offer that gentle pushback against it being literally useless.

If you're worried that this will lead to lax academic standards or shoddy research practices, I'd reassure you that academic standards have never been laxer and shoddy research is absolutely everywhere, and the existence of review boards and similar apparatchik-filled bodies does nothing to curb these. If anything, by preventing basic research being done by anyone except those with insider connections and a taste for bureaucracy, they make the problem worse. Similarly, academia is decreasingly valuable for delivering basic research; the incentive structures have been too rotten for too long, and almost no-one produces content with actual value.

Whew, lots of thoughts. Let's start with total agreement that academic standards have never been laxer, that shoddy research is absolutely everywhere, that review boards and such have done nothing to curb it, and that almost no one produces content with actual value. Moreover, I would agree that the incentive structures have been too rotten for too long, and I think that this is a huge driver of the previous four items. "Publish or perish" has crept ever earlier, and I've actually stopped going to conferences, due to the flood of interestingly-titled talks that end up being, "So, I'm an undergrad, and this is really preliminary work, and... [total garbage]." The undergrads feel like they have to publish bullshit in order to get into grad school, the grad students feel like they have to publish bullshit to get a post-doc, the post-docs feel like they have to publish bullshit to get a professorship, and the assistant professors feel like they have to publish bullshit to get tenure (after tenure, paths tend to bifurcate a bit more, it seems), so the assistant professors are more than happy to push everyone down the chain to go ahead and publish their bullshit (so long as their name is on it, so it adds to a count on Google Scholar). It is all for the sake of number go up rather than advancing knowledge.

This trend has been decades long, but I would argue that it has also been exacerbated by one particular huge drop in barrier to submission - the rise of China. I don't know if they've really subscribed to our fucked up incentive structure, because I just don't know as much about how their unis work, but the regional flood has gone global. Not to say that there is zero good work coming from there (I finally recommended acceptance of my first paper from a Chinese group, but it's unsurprising that it was an exceptional case, as the main guy in that group is good enough that he's now taken a position at an excellent Western uni), but the sheer number of folks pushing out digital ink from there is astounding, and the vast majority of it is undergrad-tier bullshit. It really makes me pause when you say:

If anything, by preventing basic research being done by anyone except those with insider connections and a taste for bureaucracy, they make the problem worse.

I really go back and forth in my head. Does having a sort of mental rule that nearly automatically rules out all Chinese work help? Probably so, like 99% of the time. Does having a sort of mental rule that if the first author is an undergrad from a low-tier uni, it's probably shit help? Probably so, like 99% of the time. I joke sometimes that I've never even seen a contemporary masters thesis that was at all interesting (there are some legendary ones from the past, by legit giants in their fields). Having some form of statistically-informed heuristic realllly saves a lot of time and effort that would otherwise be 99% wasted. This sort of "credential chauvinism" obviously won't stop the flood; incentive structures for conferences/journals are also fucked up enough that there's zero chance any will adopt a position of basically, "If you're an undergrad or from a Chinese university, your submission will basically be auto-denied." But they're adopted very pragmatically, almost out of necessity, by the good researchers who just don't have that much time to waste. (I know the irony of writing this in a bloody comment on a rando internet forum.)

My personal strategy is basically defection/free-riding. I've cultivated a network of really talented profs who I personally know, have personally spoken with enough to know how they think, with a not-insignificant percentage of them being assistant profs. They basically feel forced by their incentive structures to wade through all of the crap and constantly engage with all the conferences and reviewing and editing and shit... and they like magically filter through it all and bring up the small number of diamonds in the rough. Is it the best-tuned filter? Possibly not. Might I gain some additional insight by wading through more of it actively? Possibly. But damn if I'm going to ever feel like the cost/benefit tradeoff is going to be worth it anytime soon. But the nature of defection/free-riding is that not everyone can do it without catastrophic consequences.

Getting back to the point of LLMs, this is what I'm worried about. Sure, it'll make it harder for dyed-in-the-wool bureaucrat-and-nothing-more folks, but it'll also make it harder for us. It'll be Eternal September for academia. What possible filters can stand?

Meanwhile I expect harder fields like biomed and material sciences to (continue to) be supercharged by the capabilities of ML, with the comparative ineffectiveness of institutional research being shown up by insights from DeepMind et al.

I don't have as much to say on the social sciences bit, and you may well be perfectly on point there. For the harder sciences, I'm really not sure where this will go. Traditional work has been extremely structured in form, but I do subscribe to the Nick Weaver School of ML in Research, which is, "ML is great for when you want to model something that you have a good reason to think is structured, but you have no idea how to model it, and you're okay with it being fabulously wrong some percentage of the time." Biomed and materials science are perfectly positioned to reap the gains of this. Those areas in particular have fantastically complicated underlying structures, and at least the experimental folks don't much care how we get a half-decent idea of what to try to build; so long as each iteration doesn't take too much time, we can just try a bunch of them and see what works out. Huge potential for big experimental gains, and a decent chance that experimental gains will subsequently push the theory part forward around those new lodestones. My view of the trends in institutional research has been that they've gone full steam ahead at trying to embrace it for every problem under the sun, even when it doesn't make much sense. But for every work that gets accelerated, every wonder material that gets developed, how many shit-tier "ML" papers will be submitted/published in the field that turn out to just be awful, obnoxious noise? (While this last paragraph shares the earlier concern about a flood of crap, it's a bit distinct, as it's less focused specifically on the administrative side of the crap production/evaluation.)
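In case the Nick Weaver framing sounds abstract, here's a toy sketch of the screen-with-a-surrogate workflow I have in mind, with synthetic data standing in for real measurements. Everything here - the features, the model choice, the numbers - is made up for illustration:

```python
# Toy sketch of ML-as-surrogate for experiment selection: fit a cheap model
# on past measurements, rank untested candidates, and accept that some picks
# will be fabulously wrong. All data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Pretend each row is a candidate material described by 5 composition/process
# features, and y is a property measured in experiments already run.
X_done = rng.uniform(size=(200, 5))
y_done = X_done @ np.array([2.0, -1.0, 0.5, 0.0, 3.0]) + rng.normal(0.0, 0.3, 200)

surrogate = RandomForestRegressor(n_estimators=300, random_state=0)
surrogate.fit(X_done, y_done)

# Score a big pool of untested candidates; send the top handful to the lab.
X_pool = rng.uniform(size=(10_000, 5))
predicted = surrogate.predict(X_pool)
best = np.argsort(predicted)[::-1][:5]
print("candidates to try next:", best, predicted[best].round(2))
```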

I'm not exaggerating when I say that their continued unchecked growth is a civilisation-level threat to us.

Agreed. I know it sounds overly dramatic, but this hard press on the brakes is bringing us down.

If you're worried that this will lead to lax academic standards or shoddy research practices, I'd reassure you that academic standards have never been laxer and shoddy research is absolutely everywhere, and the existence of review boards and similar apparatchik-filled bodies does nothing to curb these.

I'm well aware, and that's not what I'm worried about at all.

I know a bunch of social workers, some of them a generation or so older than me, and I heard a few stories of how things used to run. Like nowadays, there was actual work, and there was a bunch of bureaucratic stuff to deal with. Back in the day you had to type it all out on actual paper, mail it, etc. Then everybody and their dog started using computers for everything, everything got digitized, hundreds of apps meant to automate the drudgery got deployed, you could instantly send documents via e-mail... do you want to take a wild guess in which of these eras people spent more time doing actual work than they did dealing with bureaucratic nonsense?

I worry the same will happen with AI. No, it will not "make us equal"; it's a side rant, but I'm shocked anyone could even utter such a sentiment with a straight face. There are, and there have always been, entire institutions devoted to the task of ensuring this will never happen. What will happen is that you will need AI to even keep up. You will need an AI text generator to output the sheer amount of text you will now be required to write in order to cover your ass, and you will need an AI summarizer to "read" the tonnes upon tonnes of paper that will be sent your way. The best part is that all of this will be centralized in the hands of a few companies, whose owners hang out at the same cocktail parties as the various panopticon fetishists at the top of our society, and who will dictate how exactly this AI needs to be lobotomized to only output goodthink. It will then be the perfect tool for them to "nudge" us. Old geezers like you or me might remember a world where you needed to process information yourself, but children born in the new one will only ever know information summarized to them via AI.

I've mentioned this before, but this is what drives me up the wall with AI-optimists. I know it's hard to learn the lessons of history, but this doesn't even count as history. We literally just watched Big Tech bitch-slap the ever-loving hell out of utopian tech-nerds like 5 minutes ago, and I'm now supposed to jump on the next bandwagon that is going to "make us equal"? Give me a break.

I worry the same will happen with AI. No, it will not "make us equal"; it's a side rant, but I'm shocked anyone could even utter such a sentiment with a straight face.

That was my first thought as well. Like just how disconnected from the reality of day-to-day work does someone have to be for this prediction to make a lick of sense? It genuinely boggles the mind.

As I understand it, this is already a standard tactic used by large law firms to crush individual lawyers. They don't need LLMs; they just hire a ton of new lawyers to churn out vast amounts of legal documents. A single lawyer trying to fight a lawsuit against them gets buried, because he just can't physically read all of that stuff and respond in any human lifetime, and if he can't respond, he loses by default.

Big corporations also do this as a defense mechanism. So you want to sue them, and they're required to turn over the relevant docs? Oh they'll do that... but the "relevant docs" are like a million pages of garbage. Again, only a giant law firm has the resources to actually read through all of it and process it effectively. DDOS via human bureaucracy.

I share your pessimism. I think of it this way: the amount of time it takes to do anything increases with the amount of time available; the amount of admin that a bureaucracy requires of you increases with the amount they think you can do.

So LLM-driven increases in people's capacity to handle admin work will result in an increase in the amount of required admin. "Of course you can complete this form as well; you can use an LLM to help you..."

Right now, the bureaucratic class in education, government, and elsewhere exert a strongly negative influence on productivity, and they have absolutely no incentives to trim down red tape to put themselves out of jobs or reduce the amount of power they hold.

I think this is a misdiagnosis of what's going on. On an individual level, where someone's work is directly threatened by the removal of some administrative or bureaucratic requirement, they might oppose its removal, but at the same time there are at least ten people in the same organization who are burdened by that requirement and dream of it being removed. Also, even if the requirement is removed, they're unlikely to be fired; they'll be reassigned, and hiring will be reduced.

It is not the administrative state that is driving this; it's the regulatory state, mostly because they don't actually have to deal with the consequences of the regulations they create. It is good to pass regulation, because you're then seen as "doing something" and the costs are too diffuse to be traced back to you by the electorate. There is no one who can say no, so the regulatory complexity grows, and with it cost disease. It's kind of like a spaghetti codebase that is never refactored, because the people with power have no incentive to do so and the code will never actually fail to execute, only work worse and require more compute.

That said, the opinions of a department wholly concerned with dealing with some bureaucratic hurdle are suspect when it is asked about the value of said regulation. But again, such departments are usually small parts of an overall organisation that is concerned with something else of concrete value.

It is possible that LLMs will help, but I could see it being a wash, because they also enable bad behaviour, which necessitates further regulation and enforcement.

As we get closer to the point where LLMs can spontaneously generate 5000-10000 word pieces that make plodding but cogent arguments and engage meticulously with the existing literature, huge swathes of the academic journal industry will simply be unable to survive

I think you're wrong about this being a good thing. Currently, the best journals in most fields allow anyone to submit. Sometimes you get people outside "the cathedral" getting really novel ideas published and changing fields. Once it becomes too easy for hoi polloi to submit, journal editors will start relying more and more on the author's credentials. Not from Harvard/Yale/Oxbridge? Then you're totally out of luck.

This is a double edged sword. Further gaming of the system means more control over the existing institutions, but it also means more numerous, competent and motivated counter elites.

Some people seem to think Harvard can't purity spiral itself into irrelevance because it's so inextricably tied with the existing power structure. But power structures are not eternal laws of nature.

We've seen this with finance, the level of control there is pretty grandiose (the whole idea of an "accredited investor" is ridiculous to say nothing of AML etc) but that's just put a giant prize on building alternatives. And people did and are.

An "accredited investor" is just someone upper middle class or richer. (Individual income >$200k, household income >$300k, or household net worth ex home equity >$1m all qualify with no paperwork). I don't think it is a good rule, but it doesn't constitute a grandiose level of control. Given the purpose of the rule (that an accredited investor is someone rich enough that the median voter will point and laugh rather than sympathizing if they get scammed), the limits are arguably too low. Selling bad investments to dentists is not pro-social, and if it was illegal the people currently doing it would probably find something better to do.

I think the idea that the State can just come in and tell you that you're too stupid to use your own money correctly is grandiose in itself. But I'm pointing more generally at the category of financial credentials. The more esoteric the financial product, the more weird hoops you usually have to jump through to trade it with any reasonable liquidity. And the more hoops, the more opportunities for exception, and therefore power.

the whole idea of an "accredited investor" is ridiculous

I'm pretty sure the idea of "accredited investor" is really defining "investors the SEC allows to do extra risky stuff because the median voter will at best laugh when they lose their shirts." It's a glorified CYA measure so the government can claim "freedom" exists without the bureaucrats getting hauled in front of an angry Congress asking why their sympathetic constituents (retired teachers, etc) lost money.

That's a fair assessment, but it's part of a large swath of similar credentials that are meant to split the market between people that know what they're doing and stupid amateurs. In a way that effectively removes both risk and opportunity from you if you're not willing to jump through hoops.

There's a ton of opportunity in investing in startups early for instance, and a ton of money both to win and to lose (mostly to lose), but protecting your average wagie from getting swindled also means he's never going to make it big. And as with any bureaucratic control, there's discretion to make exceptions, which means there's power.

We've seen this with finance, the level of control there is pretty grandiose (the whole idea of an "accredited investor" is ridiculous to say nothing of AML etc) but that's just put a giant prize on building alternatives. And people did and are.

All being an accredited investor does is make you a target for more scams. That includes those "alternatives", assuming you mean various crypto-related investing efforts (aside from just investing in the coins, not all of which are scams).

Harvard can't purity-spiral itself into irrelevance because it's inextricably tied to the entire leftist power structure, which is too big and self-reinforcing to fail.

Why is the scientific world always conspiring to turn into Warhammer 40k's Mechanicus? Keepers of all-powerful artefacts that nobody has any idea how to fix anymore, because everyone was too busy working the bleeding edge of minutiae to write most of the basic stuff down.

I need to flesh this out at some point, but it's bothered me for a while that so much of the resources of science are seemingly dedicated only to new and revolutionary insight, and barely anything at all is spent on making sure what we're discovering fits together in a way that's humanly comprehensible, or even has any degree of truth that can be verified.

But yeah let's just start forgetting how to even discover stuff and outsource that to machines, what could possibly go wrong.

We're living in a meme and it's not even a good one.

Why is the scientific world always conspiring to turn into Warhammer 40k's Mechanicus? Keepers of all-powerful artefacts that nobody has any idea how to fix anymore, because everyone was too busy working the bleeding edge of minutiae to write most of the basic stuff down.

Because 40K was originally written as satire, and in order to fulfil their role well, satirists must have a far deeper and more complete understanding of humanity than any "social scientist".

A core conceit of the liberal academic mindset is that basic competence doesn't matter. That it's somehow beneath them. And I think that's gonna bite them in the ass, and I think the satirists at GW either noticed the same trends or could extrapolate from the exaggerated stereotype/strawman.

and I think the satirists at GW either noticed the same trends or could extrapolate from the exaggerated stereotype/strawman

Well, whether it was satire originally is actually up for debate! It was medieval society in space. In other words, the Adeptus Mechanicus is what you get if Catholic monks take over future science. They are very competent at getting things to work, but they make everything into doctrine, and have heretics and schisms (but see Flanderization below).

Rick Priestley was a Classics and Ancient History graduate and states that 40K is what happens if a medieval society was a spacefaring one.

"Possibly the biggest influence is history rather than fiction though - actual religious practices and beliefs. I downplayed that aspect of it all when I was at GW, because you wouldn't want to be seen to make-light of people religious belief"

"But I have in the past pointed out the parallels between Christian mythology and the 40K background - with The Emperor as the 'sacrifical god' whose suffering redeems mankind (some other religions have this idea of the 'sacrificial god or king' - there is a lot of this in Frazer's Golden Bough, of course, and also in The White Goddess by Robert Graves should you be interested in such things). The concept of sacrifice within religion is very common - and it has a lot of resonance within Christianity - and the Emperor in 40K has taken on the Christ-like role - with the dual identity as 'dead' and 'eternal god' (though impossible to know that of course - but people have faith and faith alone is enough to sustain the universe) and with Horus cast into the roll of Satan. The original description of the Horus Heresy (Chapter Approved I think) is actually a fairly obvious rewrite of the war in heaven and casting out of the fallen angels - with Space Marines as 'angels' a theme which persists even to this day, I believe."

"that the mystical, pseudo-religious stuff just overwhelmed what science I actualy put into the original game."

" I don't think it was anything specific. It goes back to stuff like Edgar Rice Burroughs (Barsoom) and was a common theme on TV with things like the first Star Trek and Dr Who - where you had a kind of technician/wizard ruling class - Eloi and Morlocks even with HG Wells - so I think treating technical or scientific knowledge in that revered, practically religious, way wasn't such a leap really."

"Well - I coined the phrase in - I think - the Book of the Astronomican in terms of The Horus Heresy - although it's possible we'd described things as heretical before that. It's just part of the pseudo-religious nature of the background - I don't think the word has a different meaning in 40K than the real world - it just suggests sectarian disputation and the sort of controversies that created the Great Schism, the Albigensian Heresy, and endless similar nonsense in the real world. I don't think that contemporaries of the 'Horus Heresy' would have called it that - it's a retrospective name - but of course GW couldn't cope with that kind of concept - they portray a consistent mind-set across ten thousand years of history... which of course is another nonsense 🙂"

"BIFFORD: Is the Imperium of Man supposed to be an indictment of religion?

PRIESTLEY: That wasn't the intent! It's a dystopian future in which people believe crazy stuff because not to do so would bring society (and humanity) tumbling about its ears - so the various institutions of the Imperium are massively invested in things that may or may not be true... I just gave those things a pseudo-religious context because it's an obvious parallel with religious schisms during the European Reformation."

BIFFORD: Oh? What "crazy beliefs" are you referring to exactly? And how are they essential to society's survival?

PRIESTLEY: That the Emperor is a 'god' that he is capable of expressing his will in some material fashion - that the institutions of the Imperium are divinely directed - that they are working to the same end - and (this has tended to vanish over the years) that ancient technologies are activated or controlled by magic or inhabited by spirits, that ritual tasks have magical power... for example... I once wrote a piece that we didn't use in which a subterranean worker in the Emperor's palace had the job of replacing all the light bulbs as they stopped working - but over the years the supply of light bulbs ran out - but the job still existed and was inherited generation to generation - but it had evolved into painting all the dud bulbs white so they looked like they might work - it had become a ritual, extending over centuries, that had accumulated shamanic significance within the underworld of the palace - but was ultimately... nonsense! Within that society our bulb painter has a role and respect, and the society has cohesion - albeit a bit crazy."

"At a time when most people didn’t go to college we were all graduates – Phil Gallagher studied Russian at Cambridge – and both me and Graeme (and Nigel Stillman for that matter) had studied archaeology so we brought a lot of broad cultural and historical references into our worlds."

Rick Priestley reworked Rogue Trader - an idea he'd had before - because it was part of the deal for him working on other things. Neither he nor Bryan Ansell was aiming for satire at that point. He specifically points out he wasn't trying to make light of people's beliefs.

The reason it has become a satire is that it was then developed by later writers who could only see such a "backwards" future as a satire of religious zealots and fascism. But Priestley did not envisage it as such. And indeed he can't bring himself to play or interact with 40K nowadays, because it has drifted so far from his original vision. In his version, aliens worked alongside mankind, and it was much less xenophobic and much less grimdark. Indeed, the original creators were almost all part of the liberal academia you talk about, and were proud of it. A lot of the satire there is was inherited from the fact that Priestley was told he had to put in rules so that 2000AD, Rogue Trooper and Nemesis the Warlock minis/ideas (GW properties at the time) could be used. The Adeptus Mechanicus was very competent in its initial state; it has been (as the whole setting has been) Flanderized over the years.

I think it is pretty clear from its history, though, that the one thing it is not is a satire of academia. It was reworked to be a satire of religion and fascism, though how satirical it is has waxed and waned over the decades. Priestley's initial intention was basically: what if you put Medieval Europe into space? How would that look? What if Benedictine monks were the scientists? What if knightly orders were angelic super-soldiers? What if God was rebelled against by his creations in such a place? What if there were also space elves and dwarves and orks? And also Judge Dredd? And the Inquisition? What if I took almost every Christian medieval trope and just "bunged it in" (his own words)? Look at his words above, and his other interviews.

Priestley et al. were historical academic wargaming nerds and probably did not write 40K as a satire as such, but it inherited some satire from 2000AD and was then interpreted that way entirely by the following writers as GW became a big business. Priestley's initial conception of the Imperium is much closer to a homage than a satire. And Ansell's initial conception of the Chaos Gods was much more nuanced than them being evil. It was quite possible to be a good, heroic Chaos worshipper, with all of the gods representing both the negative and positive emotions within humanity's collective unconscious.

The original 40K - Rogue Trader - universe was not much of a satire at all; it was Priestley (primarily) paying homage to his passionate interests (history, wargaming/war, roleplaying, science fiction, etc.) by folding them into one big dystopian but nuanced world. Now, of course, I am not sure 40K even understands the word nuance. But there we go.

https://warhammer40k.fandom.com/wiki/Birmingham

Birmingham is known as the Black Planet because it receives almost no visible light from its system's sun. As a result, the world receives few visitors from the wider Imperium, and its inhabitants have become linguistically and culturally isolated. Its technology is primitive and pre-industrial compared to the rest of the Imperium. For instance, the favoured weapon among the natives is still the black powder musket.

You know what, I agree that 40k isn't satire, just history with the serial numbers filed off.

Ahh, now, Birmingham isn't that bad. Stoke-on-Trent, on the other hand...

It doesn't even merit a wiki entry in 40k, so I can only imagine the abyssal horrors that dwell there.

Well the joke is, when they filmed a zombie apocalypse movie in Stoke-on-Trent, they didn't need to make any changes to either the area or the citizens.

But honestly it's not that bad. Pretty similar to most hollowed-out ex-manufacturing towns. But with oatcakes and people calling you "duck".

I was not aware of any of this having only gotten into Warhammer in the early-mid 2000s. TIL

Then you are one of today's (or yesterday's) 10,000! To be fair, GW changed pretty quickly in '90 or '91, with much of the original personnel being sidelined or leaving entirely. And they definitely do now officially say themselves that the Imperium is to be seen satirically.

Bryan Ansell

Apparently he just died December 30th. https://www.polygon.com/24023687/bryan-ansell-warhammer-co-creator-obituary

I saw! I met him a time or two back in the day (Priestley more so). The nerdy wargaming scene in the Midlands back in the '80s was... pretty small. I recall he was a nice guy, though very intense.