self_made_human

Kai su, teknon?

16 followers   follows 0 users  
joined 2022 September 05 05:31:00 UTC

I'm a transhumanist doctor. In a better world, I wouldn't need to add that as a qualifier to plain old "doctor". It would be taken as granted for someone in the profession of saving lives.

At any rate, I intend to live forever or die trying. See you at Heat Death!

Friends:

I tried stuffing my friends into this textbox and it really didn't work out.

User ID: 454


This happens because while the model has been programmed by some clever sod to apologize when told that it is wrong, it doesn't actually have a concept of "right" or "wrong". Just tokens with different correlation scores.

This is wrong! It would have been a reasonable claim to make a few years back, but we know for a fact this isn't true now:

https://www.anthropic.com/research/tracing-thoughts-language-model

It turns out that, in Claude, refusal to answer is the default behavior: we find a circuit that is "on" by default and that causes the model to state that it has insufficient information to answer any given question. However, when the model is asked about something it knows well—say, the basketball player Michael Jordan—a competing feature representing "known entities" activates and inhibits this default circuit (see also this recent paper for related findings). This allows Claude to answer the question when it knows the answer. In contrast, when asked about an unknown entity ("Michael Batkin"), it declines to answer.

There was other relevant work which shows that models, if asked whether they're hallucinating, can usually find such errors. They very much have an idea of true versus false; to deny that would be to deny the same for humans, since we ourselves confabulate or can be plain old wrong.

Unless you explicitly tell/program it to exclude the specific mistake/mistakes that it made from future iterations (a feature typically unavailable in current LLMs without a premium account), it will not only continue to make but "double down" on those mistakes, because whatever most correlates with the training data must, by definition, be correct.

Gemini 2.5 Pro Thinking in particular is far more amenable to reason. It doesn't normally double down and will accept correction. At least ChatGPT has the option to add memories about the user, so you can save preferences or tell it to act differently.

I'm slightly disappointed to catch it hallucinating, which is why I went to this much trouble instead of just accepting that as a fact the moment someone contested it. It's still well ahead of the rest.

Ah... I get it now. Thank you! I'm disappointed to see hallucination and confabulation here, but if you're inclined, do keep trying out Gemini 2.5 Pro Thinking in particular. It's a good model.

Hmm. I think that's likely because my prompt heavily encouraged it to reason and calculate from first principles. It's a good thing that it noted that those attempts didn't align with pre-existing knowledge, and accurately recalled the relevant values, which must be a nigh-negligible amount of the training data.

At the end of the day, what matters is whether the model outputs the correct answer. It doesn't particularly matter to the end user if it came up with everything de-novo, remembered the correct answer, or looked it up. I'm not saying this can't matter at all, but if you asked me or 99.999% of the population to start off trying to answer this problem from memory, we'd be rather screwed.

Thanks for the suggestion and for looking through the answer. I've personally run up against the limits of my own competence, and there are few things I can ask an LLM to do that I can't, while still verifying the answer myself.

  1. LessWrong led the charge on even considering the possibility of AI going badly, and on treating that as a concern to be taken seriously. It was the raison d'être for both OpenAI (initially founded as a non-profit to safely develop AGI) and especially Anthropic (founded by former OpenAI leaders explicitly concerned about the safety trajectory of large AI models). The idea that AGI is plausible, potentially near, and extremely dangerous was a core tenet in those circles.

  2. Anthropic in particular is basically Rats/EAs, the company. Dario himself, Chris Olah, a whole bunch of others.

  3. OAI's initial foundation as a non-profit was supported by funds from Open Philanthropy, an EA/Rat charitable foundation. They received about $30 million, which meant something in the field of AI back in the ancient days of 2017. SBF, notorious as he is, was at the very least a self-proclaimed EA and invested a large sum in Anthropic. Dustin Moskovitz, the primary funder for Open Phil, led the initial investment into Anthropic. Anthropic President Daniela Amodei is married to former Open Philanthropy CEO Holden Karnofsky; Anthropic CEO Dario Amodei is her brother and was previously an advisor to Open Phil.

As for Open Phil itself, the best way to summarize is: Rationalist Community -> Influenced -> Effective Altruism Movement -> Directly Inspired/Created -> GiveWell & Good Ventures Partnership -> Became -> Open Philanthropy.

Note that I'm not claiming that Rationalists deserve all the credit for modern AI. Yet a claim that the link between them is as tenuous as that between ice cream and drowning is farcical. Any study of the aetiogenesis of the field that ignores Rat influence is fatally flawed.

I copied your comment, and it insisted it was correct. I then shared the image, and it seems to think that the issue is imprecise terminology on its part rather than an actual error.

Here's the initial response:

https://rentry.org/yzvh9n47

After putting the image in:

https://rentry.org/c6nrs385

The important bit:

The proof never claims $r_i = n^{k-i-1}$. It uses $r_i = n^{k-i-1} \pmod{2^{i+1}}$ and the derived property $r_i \ge 1$.

Conclusion: The confusion likely arises from either the slightly ambiguous notation in the highlighted sentence (which should explicitly state "fractional part of ... is ...") or a misreading of the later step where the lower bound $r_i \ge 1$ is applied. The mathematical logic itself appears sound.

Thank you.

What do you mean by "reference NIST"? I think I've already mentioned that despite its internal chain of thought claiming to reference NIST or "look up" sources, it's not actually doing that. It had no access to the internet. I bet that's an artifact of the way it was trained, and regardless, the COT, while useful, isn't a perfect rendition of inner cognition. When challenged, it apologizes for misleading the user, and says that it was a loose way of saying that it was wracking its brains and trying to find the answer in the enormous amount of latent knowledge it possesses.

I also find it very interesting that the model that couldn't use code to run its calculations got a very similar answer. It did an enormous amount of algebra and arithmetic, and there was every opportunity for hallucinations or errors to kick in.

I don't follow the AI developments terribly closely, and I'm probably missing a few IQ points to be able to read all the latest papers on the subjects like Dase does, so I could be misremembering / misunderstanding something, but from what I heard capital 'R' Rationalism has had very little to do with it, beyond maybe inspiring some of the actual researchers and business leaders.

Yudkowsky himself? He's best described as an educator and popularizer. He hasn't done much in terms of practical applications, beyond founding MIRI, which is a bit player. But right now, leaders of AI labs use rationalist shibboleths, and some high-ranking researchers like Neel Nanda, Paul Christiano and Jan Leike (and Ryan Moulton too, he's got an account here to boot) are all active users on LessWrong.

The gist of it is that the founders and early joiners of the big AI labs were strongly motivated by their belief in the feasibility of creating superhuman AGI, and also by their concern that there would be a far worse outcome if someone else, who wasn't as keyed into concerns about misalignment, got there first.

As for building god, I think I heard that story before, and I believe its proper ending involves striking the GPU cluster with a warhammer, followed by several strikes with a shortsword. Memes aside, it's a horrible idea, and if it's successful it will inevitably be used to enslave us.

You'll find that members of the Rationalist community are more likely to share said beliefs than the average population.

Yud had a whole institute devoted to studying AI, and he came up with nothing practical. From what I heard, the way the current batch of AIs work has nothing to do with what he was predicting, he just went "ah yes, this is exactly what I've been talking about all these years" after the fact.

Yudkowsky is still more correct than 99.9999% of the global population. He did better than most computer scientists and the few ML researchers around then. He correctly pointed out that you couldn't just expect that a machine intelligence would come out following human values (he also said that it would understand them very well, it just wouldn't care, it's not a malicious or naive genie). Was he right about the specifics, such as neural networks and the Transformer architecture that blew this wide open? He didn't even consider them, but almost nobody really did, until they began to unexpectedly show promise.

I repeat, just predicting that AI would reach near-human intelligence (not that they're not already superintelligent in narrow domains) before modern ML is a big deal. He's on track when it comes to being right that they won't stop there, human parity is not some impossible barrier to breach. Even things like recursive self-improvement are borne out by things like synthetic data and teacher-student distillation actually working well.

In any case, when I bring up rationalism's failure, I usually mean its broader promises of transcending tribalism, systematized winning, raising the sanity waterline, and making sense of the world. In all of these, it has failed utterly.

Anyone who does really well in a consistent manner is being rational in a way that matters. There are plenty of superforecasters and Quant nerds who make bank on being smarter and more rational given available information than the rest of us. They just don't write as many blog posts. They're still applying the same principles.

Making sense of the world? The world makes pretty good sense, all things considered.

It makes sense, because my feelings toward rationalism and transhumanism are quite similar. Irreconcilable value differences are irreconcilable, though funnily enough most transhumanists, yourself included, seem like decent blokes.

Goes both ways. I'm sure you're someone I can talk to over a beer, even if we vehemently disagree on values.

(The precise phrase "irreconcilable values difference" is a Rationalist one, it's in the very air we breathe, we've adopted their lingo)

I asked Gemini 2.5 Pro Thinking to solve it. It claimed to have a solution. I asked for the most concise summary it could provide:

Okay, here's a concise summary of the proof, avoiding technical jargon:

The Problem: Imagine you have a way of writing numbers that isn't based on powers of 10 (like our usual 1s, 10s, 100s), but on powers of $2n$, where $n$ is some odd number (like powers of 6, if $n=3$, or powers of 10, if $n=5$). We look at the number $n^k$ ($n$ multiplied by itself $k$ times). We want to show that if you pick $n$ to be a really large odd number, then when you write $n^k$ in this "base $2n$" system, all the "digits" you use will be bigger than some target value $d$.

How the Proof Works:

  1. Figuring out the Digits: The proof first finds a way to calculate the digits used to write $n^k$ in the base $2n$ system. Let's call the digits $c_0, c_1, c_2, \dots$.
  2. The Last Digit ($c_0$): It turns out the very last digit ($c_0$, the "ones" place in this system) is simply $n$ itself. So, to make this digit bigger than $d$, we just need $n$ to be bigger than $d$.
  3. The Other Digits: The proof finds a formula for the other digits ($c_1, c_2,$ etc.). It shows that each digit $c_i$ is calculated by taking $n$, multiplying it by some specific odd number (which depends on $k$ and which digit it is), dividing by a power of 2 (like 2, 4, 8, ...), and then rounding down.
  4. Making Digits Large: Since the odd number multiplier is at least 1, each digit $c_i$ is at least roughly $n$ divided by some power of 2 (like $n/1, n/2, n/4, \dots, n/2^{k-1}$).
  5. The Threshold: If we want all digits to be bigger than $d$, we need even the smallest of these estimates (which involves dividing $n$ by the largest power of 2, $2^{k-1}$) to be bigger than $d$. The proof calculates that if $n$ is larger than a specific value $N$ (which depends on $k$ and $d$, specifically $N = 2^{k-1}(d+1)$), then every digit is guaranteed to be greater than $d$.
  6. Conclusion: Therefore, such a threshold $N$ exists. Pick any odd number $n$ larger than this $N$, and all the digits of $n^k$ in base $2n$ will be larger than $d$.

Was it Induction?

The main argument of the proof is not primarily by induction. It's a direct proof that calculates the digits and then finds a condition on $n$ (making it large enough) to satisfy the requirement.

However, induction was used for one small intermediate step: proving that $n^k$ always leaves a remainder of $n$ when divided by $2n$ (which showed that the last digit $c_0$ is indeed $n$). But the overall structure connecting the size of $n$ to the size of all digits didn't rely on induction.
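
Since the claim is easy to spot-check numerically, here's a minimal sketch (Python is my choice here, not anything from the thread; the function names and the particular $k$, $d$ values are purely illustrative) that verifies it for a handful of cases: pick odd $n$ just above the threshold $N = 2^{k-1}(d+1)$ and confirm every base-$2n$ digit of $n^k$ exceeds $d$.

```python
# Spot-check of the claim: for odd n > 2**(k-1) * (d + 1), every digit of
# n**k written in base 2n exceeds d. This checks examples; it is not a proof.

def digits_in_base(x, base):
    """Return the digits of x in the given base, least significant first."""
    ds = []
    while x:
        x, r = divmod(x, base)
        ds.append(r)
    return ds

def check_claim(k, d, trials=50):
    threshold = 2 ** (k - 1) * (d + 1)
    # First odd n strictly above the threshold.
    n = threshold + 1 if threshold % 2 == 0 else threshold + 2
    for _ in range(trials):
        digits = digits_in_base(n ** k, 2 * n)
        assert all(c > d for c in digits), (n, digits)
        n += 2  # next odd n
    return True

if __name__ == "__main__":
    for k in (2, 3, 5):
        for d in (1, 10, 100):
            assert check_claim(k, d)
    print("All sampled cases satisfy the claim.")
```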

Here's the raw answer (minus reasoning trace):

https://rentry.org/5s6q6nxe

Absolutely. It's the revealed preference of many a child to labor away in the virtual caves of Minecraft. What other choice do they have, when adults are so dull and near-sighted? Hook them up with a VR setup that controls real diggers! Send them to the mines, that's what the ~~minors~~ miners yearn for.

When most people use the term "utilitarianism", they're talking about the Benthamian or Singerian notion. This is a mistake I've made myself, having argued with some poor guy on the old Motte where I claimed that since I have a utility function, I'm therefore utilitarian. I've learned from that error.

My understanding is that most humans aren't VNM rational! They violate one or more of the different requirements, in the sense that their preferences can be contradictory. An example is the Allais Paradox. I don't know if any human is actually VNM rational, but I don't think that's necessarily impossible for someone who is good at meta-cognition and math.
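
For anyone unfamiliar with it, the canonical version of the Allais Paradox (standard textbook numbers, quoted from memory rather than from anything in this thread) has most people preferring A (a sure \$1M) over B (89% \$1M, 10% \$5M, 1% nothing), while also preferring D (10% \$5M, 90% nothing) over C (11% \$1M, 89% nothing). Under expected utility those two preferences contradict each other:

$$A \succ B \;\Rightarrow\; u(1\mathrm{M}) > 0.89\,u(1\mathrm{M}) + 0.10\,u(5\mathrm{M}) + 0.01\,u(0) \;\Rightarrow\; 0.11\,u(1\mathrm{M}) > 0.10\,u(5\mathrm{M}) + 0.01\,u(0)$$

$$D \succ C \;\Rightarrow\; 0.10\,u(5\mathrm{M}) + 0.90\,u(0) > 0.11\,u(1\mathrm{M}) + 0.89\,u(0) \;\Rightarrow\; 0.10\,u(5\mathrm{M}) + 0.01\,u(0) > 0.11\,u(1\mathrm{M})$$

Both inequalities can't hold at once, which is the sense in which such preferences violate the VNM axioms (the independence axiom in particular).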

Note that I'm not disagreeing with Yudkowsky here; I was aiming to rebut @Primaprimaprima's (implicit, by my understanding) claim that not being a utilitarian would disqualify him from being a "Yudkowskian Rationalist".

As an aside, this is where I most differ from Yudkowsky on the current race to AGI: he seems to think we're now extra-doomed because we don't even fully understand the AIs we're creating; I think we're now fractionally-doomed for the same reason. The contrapositive of "a utility function simple enough to understand is unsafe" is "a safe utility function is something we won't fully understand". I don't know if stochastic descent + fine-tuning for consistency will actually derive a tolerably human value system starting from human text/audio/video corpuses, but it's at least possible.

I disagree with Yud on this myself. My p(doom) has gone down from a max of 70% to a far less concerning 20% these days. Our alignment techniques, while imperfect, produce LLMs which are remarkably in sync with the goals and desires of their creators (and, to a lesser extent, their users). Anthropic is doing excellent mechanistic interpretability work, such as recent studies into how Claude actually thinks (it's not just predicting the next token; it backtracks and "thinks ahead"). They're not entirely black boxes, as was feared to be the case before modern LLMs arrived.

It's also remarkable that RLHF works, and I'm confident that Yudkowsky was surprised by this, even if his priors didn't update that much (I recall a Twitter post along these lines). I was surprised, I remember thinking, holy shit, this works??

Note that just because a model is aligned with its creators/users, that doesn't mean that it's aligned with me. Consider the possibility that a Chinese AGI follows orders while exactly understanding the CCP's intent, but said orders are to permanently disempower all non-Chinese and wrest control of the light cone (casualties are acceptable).

I will note that Gemini 2.0 Flash and GPT-4o are significantly behind the SOTA! The latter got a very recent update that made it the second best model on LM Arena, but they're both decidedly inferior in reasoning tasks compared to o1, o3 or Gemini 2.5 Pro Thinking. (Many caveats apply, since o1 and o3 have different sub-models and reasoning levels)

I asked two instances of Gemini 2.5 Pro:

Number 1:

What is the decay time for the 3p-1s transition in hydrogen? Make sure you are certain about your answer, after doing the relevant calculations.

Final answer: 5.27 ns

Second iteration:

What is the decay time for the 3p-1s transition in hydrogen? Make sure you are certain about your answer, after doing the relevant calculations. I have enabled code execution, if you think that would help with the maths.

Final answer: 5.28 ns

I wasn't lying to it, I'd enabled its ability to generate and execute code. Neither instance had access to Google Search, which is an option I could toggle. I made sure it was off. If you read the traces closely, you see mention of "searching the NIST values", but on being challenged, the model says that it wasn't looking it up, but trying to jog its own memory. This is almost certainly true.
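
For context on what "the relevant calculations" actually involve: the textbook route (my gloss, not a transcript of what the model did) is to compute the Einstein A coefficient for the electric-dipole transition and invert it to get the decay time,

$$A_{3p \to 1s} = \frac{\omega^{3}\,\bigl|\langle 1s |\, e\mathbf{r} \,| 3p \rangle\bigr|^{2}}{3\pi\varepsilon_{0}\hbar c^{3}}, \qquad \tau = \frac{1}{A_{3p \to 1s}},$$

with $\omega$ the transition angular frequency and the dipole matrix element evaluated from the hydrogen wavefunctions (averaged over magnetic sublevels as appropriate). The fiddly part, and the natural place for an unaided model to slip, is the radial integral and the bookkeeping of constants.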

I've linked to dumps of the entire reasoning trace and "final" answer:

First instance- https://rentry.org/cqty47r2

Second instance- https://rentry.org/2oyx24sa

I certainly don't know the answer myself, so I used GPT-4o with search enabled to evaluate the correctness of the answer. It claimed that both were excellent, and the correct value is around 5.4 ns according to experimental results (the decay time for the hydrogen 3p state).

I also used plain old Google, but didn't find a clear answer. There might be one in: https://link.springer.com/article/10.1007/s12043-018-1648-4?

But it's paywalled. I don't know if GPT-4o was able to access it despite this impediment.

Edit:

DeepSeek R1 without search claimed 1.2e-10 seconds. o3-mini without search claims 21 ns.

Ah, how did I ever live without modern AI. Especially the ones with massive context windows where you can throw in absurd amounts of text.

I recently decided to do a case presentation on a very complex patient. As I'm congenitally lazy, I opted for throwing large amounts of (anonymized) clinical encounter records into Gemini 2.5 Pro, and then asked it to turn it into a coherent case summary. It did a bang-up job, no hallucinations whatsoever. I put no effort into categorizing anything, as models these days are capable of figuring out user intent. (While I'm perfectly capable of doing this myself, and was familiar with the case, I believe in better living through technology)

That's nice enough, but what I really enjoyed was the ability to simply ask it to come up with the kind of thorny questions that a senior clinician might throw my way, to put me on the spot. This is an act that, for reasons unclear to me, is known as "pimping". It's a favorite pastime of many a consultant, aiming to pop the ballooning hopes and dreams of med students and residents alike. It did an excellent job, coming up with all kinds of interesting lines of questioning and asking me to justify my own suggestions about future management decisions. (Something I'd done myself; for example, I'd noted unusually rapid decreases in benzo dosage. My other boss is rubbish at this, especially given that he'd asked me for my suggestions on that front in a different case.)

Ah. Feels good. So much scut-work saved.

The actual case presentation went great. My supervisor didn't ask me anything remotely as difficult as the worst-cases I'd prepared for.

I don't think I know anyone who:

has scored >50% on USAMO, IMO, or the Putnam.

I think my younger cousin was an IMO competitor, but he didn't win AFAIK, even if he's now in a very reputable maths program.

I'm personally quite restricted in my ability to evaluate pure reasoning capability, since I'm not a programmer or mathematician. I know they're great at medicine, even tricky problems, but what makes medicine challenging is far more the ability to retain an enormous amount of information in your head than an unusually onerous demand on fluid intelligence. You can probably be a good doctor with an IQ of 120, if you have a very broad understanding of relevant medicine, but you're unlikely to be a good mathematician producing novel insights.

Thank you for listing out the models in the paper, but I was more concerned with the ones you've personally used. If you say they're in the same tier, then I would assume that you mean o3-high, o1 pro but not Claude 3.7 Sonnet Thinking (since you didn't mention Anthropic). I will note that R1, QWQ and Flash 2.0 Thinking are worse than those two, even if they're still competent models.

The best that Gemini has to offer is Gemini 2.5 Pro Thinking, which is the state of the art at present (in most domains). Is that the one you've tried? If you're not paying, you're not getting it on the app. I use it through AI Studio, where it's free. For ChatGPT, what was the best model you tried?

If you don't want to go to the trouble of signing up to AI Studio yourself (note that it's very easy), feel free to share a prompt and I'll try it myself and report back. I obviously can't judge the quality of the answer on its own merits, so I'll have to ask you.

My brother in $deity:

You believe that the Rationalist movement is an "utter failure", when it has spawned the corporations busy making God out of silicon. Even if they fail at their ultimate goal, they've proven you can get staggering intelligence out of stirring text into a pot and applying heaps of linear algebra to it. The modern Rat movement was talking about this well before you could get a neural net to reliably classify a dog or a cat. Half the founders of the major labs went at their work with the ado and gumption of wanting to ensure that what many considered the nigh-inevitable ascension of the Machine God came out favorably. Some might argue, including many Rationalists (Yudkowsky, for example) that they're bringing about the doom they seek to avert. I remain on the fence, the sharp pointy bits poking my ass.

It is beyond my ability to convince you to take this claim seriously, but as Yudkowsky said, there's no argument that can convince a rock. You'll see, and so will I, as this pans out.

it only warms my heart that more and more people are picking up the mantle of Hlynkaism, and that it's getting big enough to concern you.

It's impossible for me to express the true extent of my disdain for Hlynkaism, as practised by Hlynka, without violating the rules of this forum. Suffice it to say that if anyone found anything useful, from my perspective they achieved a borderline-heroic feat in finding utility from his rambling, often incoherent screeds. Every time he won an AAQC, I found myself scratching my head.

I will grant that my very low opinion on the matter is colored by my distaste for that gentleman, who I found obtuse and pugnacious on a good day. Racist and confused on his bad ones.

At any rate, he achieved the rather remarkable feat of getting his own friends on the mod team sufficiently fed up with his antics to perma-ban him. That's impressive, and I doff my cap at him, while rejoicing in the subsequent reduction in my average blood pressure when using this site.

I'd like to point out that Yudkowsky himself never said (to my knowledge, and I've read practically everything he's written) that utilitarianism is the correct moral system. He's on record saying multiple times that rationality is a means to an end and not an end in itself.

You can very much be a "Yudkowskian Rationalist" while holding none of his values, beyond valuing rationality for the utility it provides in a wide spectrum of situations. Probably throw in thinking about meta-rationality, too.

If you don't believe me, then the first of the Sequences is What Do We Mean By "Rationality"?:

I mean two things:

  1. Epistemic rationality: systematically improving the accuracy of your beliefs.
  2. Instrumental rationality: systematically achieving your values.

The first concept is simple enough. When you open your eyes and look at the room around you, you’ll locate your laptop in relation to the table, and you’ll locate a bookcase in relation to the wall. If something goes wrong with your eyes, or your brain, then your mental model might say there’s a bookcase where no bookcase exists, and when you go over to get a book, you’ll be disappointed.

This is what it’s like to have a false belief, a map of the world that doesn’t correspond to the territory. Epistemic rationality is about building accurate maps instead. This correspondence between belief and reality is commonly called “truth,” and I’m happy to call it that.

Instrumental rationality, on the other hand, is about steering reality—sending the future where you want it to go. It’s the art of choosing actions that lead to outcomes ranked higher in your preferences. I sometimes call this “winning.”

So rationality is about forming true beliefs and making decisions that help you win.

Emphasis added. Rationality is systematized winning, or getting what you personally want (as people can strongly disagree on what counts as victory).

I'm a Yudkowskian Rationalist, but I'm not a utilitarian. I'm a consequentialist with a complex value system that isn't trivially compressed. You could be a malevolent AGI trying to turn everyone into paperclips and still be recognized by him as rational, as long as you weren't doing it in a clearly suboptimal way.

I'm an ardent transhumanist, but I still think it's rather premature to claim that literacy is of limited utility! We can have that conversation when we develop high-bandwidth BCIs.

There are mixed opinions on how fast humans can process speech versus text. I can tell you that I read ridiculously fast without consciously speed-reading (in that I retain the material instead of running my eyes over it). An old eReader app claimed 450 wpm.

https://swiftread.com/reading-speed-test

Shows 757 WPM, but at the cost of getting one of the 4 reading comprehension questions incorrect.

Humans speak at about 150 WPM. We can process heard speech faster, like when people speed up audio books, but it probably doesn't go past 450 WPM despite training as it verges on becoming nigh incomprehensible.

At least in my case, I'm very confident that literacy is a handy skill to have. You can read silently, just about anywhere, skip ahead and behind in a stream of information with ease, without much in the way of technological assistance beyond the ability to write or read something written. Worst case, you scratch on stone or in the mud.

So, is literacy (that is, ability to read for comprehension) truly superior to other forms of recorded communication (audio-visual), and does this superiority justify the years of training one needs to master the skill?

I strongly expect that past the early years of childhood, say ages 7 or 8, one's ability to read depends far more on internal proclivity and availability of material rather than intentionally didactic approaches. To be less verbose, they don't teach you shit once you're somewhere past your ABCs.

I wouldn't even call it particularly challenging, despite the failures of modern educational systems and the quasi-literacy many of the "literate" display. You have to go very low in terms of IQ to find humans who can't read at all, no matter how hard they try, without more targeted learning disabilities.

In light of this, I'd teach any kid I had today the ability to read and write right up until the day we had BCIs, and then I'd expect it would be possible for that interface to inculcate the ability to read without its assistance (there might be a significant time gap, as it's probably easier to transfer sensory modalities than skills).

I've also noticed this when plugging grad-level QM questions into Gemini/ChatGPT. No matter how many times I tell it that it's wrong, it will repeatedly apologize and make the same mistake, usually copied from some online textbook or solution set without being able to adapt the previous solution to the new context.

Could you confirm the exact models used? Both Gemini and ChatGPT, through the standard consumer interface, offer a rather confusing list of options that's even broader if you're paying for them.

I believe it racked up 7 or so nominations when I saw it, but not sure, I'm bad at meth.

I would presume that this depends greatly on what role you have. Research scientist? Engineer? You have plenty of flexibility and good pay, albeit places like TSMC have an abysmal work culture. I'd expect that the kind of technicians who are skilled enough to work in a fab have options too, and can re-skill.

Dr. Self_made_human, or: How I Learned to Stop Worrying and Love the ~~Bomb~~ LLM

[Context: I'm a doctor from India who has recently begun his career in psychiatry in the UK]

I’m an anxious person. Not, I think, in the sense of possessing an intrinsically neurotic personality – medicine tends to select for a certain baseline conscientiousness often intertwined with neuroticism, and if anything, I suspect I worry less than circumstance often warrants. Rather, I’m anxious because I have accumulated a portfolio of concrete reasons to be anxious. Some are brute facts about the present, others probabilistic spectres looming over the future. I’m sure there exist individuals of stoic temperament who can contemplate the 50% likelihood of their profession evaporating under the silicon gaze of automation within five years, or entertain a 20% personal probability of doom from AI x-risk, without breaking a sweat. I confess, I am not one of them.

All said and done, I think I handle my concerns well. Sure, I'm depressed, but that has very little to do with any of the above, beyond a pervasive dissatisfaction with life in the UK, when compared to where I want to be. It's still an immense achievement, I beat competition ratios that had ballooned to 9:1 (0.7 when I first began preparing), I make far more money (a cure for many ailments), and I have an employment contract that insulates me to some degree from the risk of being out on my ass. The UK isn't ideal, but I still think it beats India (stiff competition, isn't it?).

It was on a Friday afternoon, adrift in the unusual calm following a week where my elderly psychiatric patients had behaved like absolute lambs, leaving me with precious little actual work to do, that I decided to grapple with an important question: what is the implicit rate at which I, self_made_human, CT1 in Psychiatry, am willing to exchange my finite time under the sun for money?

We’ve all heard the Bill Gates anecdote – spotting a hundred-dollar bill, the time taken to bend over costs more in passive income than the note itself. True, perhaps, yet I suspect he’d still pocket it. Habits forged in the crucible of becoming the world’s richest man, especially the habit of not refusing practically free money, likely die hard. My own history with this calculation was less auspicious. Years ago, as a junior doctor in India making a pittance, an online calculator spat out a figure suggesting my time was worth a pitiful $3 an hour, based on my willingness to pay to skip queues or take taxis. While grimly appropriate then (and about how much I was being paid to show up to work), I knew my price had inflated since landing in the UK. The NHS, for all its faults, pays better than that. But how much better? How much did I truly value my time now? Uncertain, I turned to an interlocutor I’d recently found surprisingly insightful: Google’s Gemini 2.5 Pro.

The AI responded not with answers, but with questions, probing and precise. My current salary? Hours worked (contracted vs. actual)? The minimum rate for sacrificing a weekend to the locum gods? The pain threshold – the hourly sum that would make me grind myself down to the bone? How did I spend my precious free time (arguing with internet strangers featured prominently, naturally)? And, crucially, how did I feel at the end of a typical week?

On that last point, asked to rate my state on the familiar 1-to-10 scale – a reductive system, yes, but far from meaningless – the answer was a stark ‘3’. Drained. Listless yet restless. This wasn't burnout from overwork; paradoxically, my current placement was the quietest I’d known. Two, maybe five hours of actual work on a typical day, often spent typing notes or sitting through meetings. The rest was downtime, theoretically for study or portfolio work (aided significantly by a recent dextroamphetamine prescription), but often bleeding into the same web-browsing I’d do at home. No, the ‘3’ stemmed from elsewhere, for [REDACTED] reasons. While almost everything about my current situation is a clear upgrade from what came before, I have to reconcile it with the dissonance of hating the day-to-day reality of this specific job. A living nightmare gilded with objective fortune.

My initial answers on monetary thresholds reflected this internal state. A locum shift in psych? Minimum £40/h gross to pique interest. The hellscape of A&E? £100/h might just about tempt me to endure it. And the breaking point? North of £200/h, I confessed, would have me work until physical or mental collapse intervened.

Then came the reality check. Curious about actual locum rates, I asked a colleague. "About £40-45 an hour," he confirmed, before delivering the coup de grâce: "...but that’s gross. After tax, NI, maybe student loan... you’re looking at barely £21 an hour net." Abysmal. Roughly my standard hourly rate, maybe less considering the commute. Why trade precious recovery time for zero effective gain? The tales of £70-£100/hr junior locums felt like ancient history, replaced by rate caps, cartel action in places like London, and an oversupply of doctors grateful just to have a training number.

This financial non-incentive threw my feelings into sharper relief. The guilt started gnawing. Here I was, feeling miserable in a job that was, objectively, vastly better paid and less demanding than my time in India, or the relentless decades my father, a surgeon, had put in. His story – a penniless refugee fleeing genocide, building a life, a practice, a small hospital, ensuring his sons became doctors – weighed heavily. He's in his 60s now, recently diagnosed with AF, still back to working punishing hours less than a week after diagnosis. My desire to make him proud was immense, matched only by the desperate wish that he could finally stop, rest, enjoy the security he’d fought so hard to build. How could I feel so drained, so entitled to 'take it easy', when he was still hustling? Was my current 'sloth', my reluctance to grab even poorly paid extra work, a luxury I couldn't afford, a future regret in the making?

The AI’s questions pushed further, probing my actual finances beyond the initial £50k estimate. Digging into bank statements and payslips revealed a more complex, and ultimately more reassuring, picture. Recent Scottish pay uplifts and back pay meant my average net monthly income was significantly higher than initially expected. Combined with my relatively frugal lifestyle (less deliberate austerity, more inertia), I was saving over 50% of my income almost effortlessly. This was immense fortune, sheer luck of timing and circumstance.*

It still hit me. The sheer misery. Guilt about earning as much as my father with 10% the effort. Yet more guilt stemming from the fact that I turned up my nose at locum rates that would have had people killing to grab them, when my own financial situation seemed precarious. A mere £500 for 24 hours of work? That's more than many doctors in India make in a month.

I broke down. I'm not sure if I managed to hide this from my colleague (I don't think I succeeded), but he was either oblivious or too awkward to say anything. I needed to call my dad, to tell him I love him, that now I understand what he's been through for my sake.

I did that. Work had no pressing hold on me. I caught him at the end of his office hours, surgeries dealt with, a few patients still hovering around in the hope of discussing changes or seeking follow-up. I haven't been the best son, and I call far less than I ought to, so he evidently expected something unusual. I laid it all out, between sobbing breaths. How much he meant to me, how hard I aspired to make him proud. It felt good. If you're the kind to bottle up your feelings towards your parents, then don't. They grow old and they die; that impression of invincibility and invulnerability is an illusion. You can hope that your love and respect were evident from your actions, but you can never be sure. Even typing this still makes me seize up.

He handled it well. He made time to talk to me, and instead of mere emotional reassurance (not that it's not important), he did his best to tell me why things might not be as dire as I feared. They're arguments that would fit easily into this forum, and are ones I've heard before. I'm not cutting my dad slack because he's a typical Indian doctor approaching retirement, not steeped in the same informational milieu as us, dear reader, yet he did make a good case. And, as he told me, if things all went to shit, then all of us would be in the shit together. Misery loves company. (I think you can see where I get some of my streak of black humor)

All of these arguments were priced in, but it did help. I can only aspire towards perfect rationality and equipoise, I'm a flawed system trying to emulate a better one in my own head. I pinned him on the crux of my concern: There are good reasons that I'm afraid of being unemployed and forced to limp back home, to India, the one place that'll probably have me if I'm not eligible for gainful employment elsewhere. Would I be okay, would I survive? I demanded answers.

His answer bowled me over. It's not a sum that would raise eyebrows, and might be anemic for financially prudent First Worlders by the time they're reaching retirement. Yet for India? Assuming that money didn't go out of fashion, it was enough, he told me (and I confirmed), most of our assets could be liquidated to support the four of us comfortably for decades. Not a lavish lifestyle, but one that wouldn't pinch. That's what he'd aimed for, he told me. He never tried to keep up with the Joneses, not when worse surgeons drove flashier cars, keeping us well below the ceiling that his financial prudence could allow. I hadn't carpooled to school because we couldn't afford better, it was because my dad thought the money was better spent elsewhere. Not squandered, but saved for a rainy day. And oh brother (or sister), I expect some heavy rain.

The relief was instantaneous, visceral. A crushing weight lifted. The fear of absolute financial ruin, of failing to provide for my family or myself, receded dramatically. But relief’s shadow was immediate and sharp: guilt, intensified. Understanding the sheer scale of that safety net brought home the staggering scale of my father’s lifetime of toil and sacrifice. My 'hardships' felt utterly trivial in comparison. Maybe, if I'm a lucky man, I will have a son who thinks of me the way I look up to my dad. That would be a big ask, I'd need to go from the sum I currently have to something approaching billionaire status to have ensured the same leap ahead in social and financial status. Not happening, but I think I'm on track to make more than I spend.**

So many considerations and sacrifices my parents had to make for me are ones I don't even need to consider. I don't have to pick up spilled chillies under the baking sun to flip for a profit. I don't have to grave-rob a cemetery (don't ask). Even in a world that sees modest change, compared to transformational potential, I don't see myself needing to save for my kid's college. We're already waking up to the fact that, with AI only a few generations ahead of GPT-4, the whole thing is being reduced to a credentialist farce. Soon it might eliminate the need for those credentials.

With this full context – the demanding-yet-light job leaving me drained, the dismal net locum rates, my surprisingly high current income and savings, the existential anxieties buffered by an extremely strong family safety net, and the complex weight of gratitude and guilt towards my father – the initial question about my time/money exchange rate could finally be answered coherently.

Chasing an extra £50k net over 5 years would mean sacrificing ~10 hours of vital recovery time every week for 5 years, likely worsening my mental health and risking burnout severe enough to derail my entire career progression, all for a net hourly rate barely matching my current one. That £50k, while a significant boost to my personal savings, would be a marginal addition to the overall family safety net. The cost-benefit analysis was stark.***
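
To make the arithmetic behind that trade-off explicit, here's a rough sketch: the ~£21/h net figure and the ~10 extra hours a week come from above, while the ~48 working weeks per year is my own assumption purely for illustration.

```python
# Back-of-the-envelope check of the "extra £50k over 5 years" figure.
net_rate_gbp_per_hour = 21   # net locum rate after tax, NI, student loan (from above)
extra_hours_per_week = 10    # the weekly recovery time being sacrificed
weeks_per_year = 48          # assumed working weeks per year (my assumption)
years = 5

extra_net = net_rate_gbp_per_hour * extra_hours_per_week * weeks_per_year * years
print(f"Extra net income over {years} years: £{extra_net:,}")  # -> £50,400
```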

The journey, facilitated by Gemini’s persistent questioning, hadn't just yielded a number. It had forced me to confront the tangled interplay of my financial reality, my psychological state, my family history, and my future fears. It revealed that my initial reluctance to trade time for money wasn't laziness or ingratitude, but a rational response to my specific circumstances.

(Well, I'm probably still lazy, but I'm not lacking in gratitude)

Prioritizing my well-being, ensuring sustainable progress through training, wasn't 'sloth'; it was the most sensible investment I could make. The greatest luxury wasn't avoiding work, but having the financial security – earned through my own savings and my father’s incredible sacrifice – to choose not to sacrifice my well-being for diminishing returns. The anxiety remains, perhaps, but the path forward feels clearer, paved not with frantic accumulation, but with protected time and sustainable effort. I'll make more money every year, and my dad's lifelong efforts to enforce a habit of frugality means I can't begin to spend it faster than it comes in. I can do my time, get my credentials while they mean something, take risks, and hope for the best while preparing for the worst.

They say the saddest day in your life is the one where your parents picked you up as a child, groaned at the effort, and never did so again. While they can't do it literally without throwing out their backs, my parents are still carrying me today. Maybe yours are too. Call them. ****

If you've made it this far, then I'm happy to disclose that I've finally made a Substack. USSRI is now open to all comers. This counts as the inaugural post.

*I've recently talked to people concerned about AI sycophancy. Do yourself a favor and consider switching to Gemini 2.5. It noted the aberrant spike in my income, and raised all kinds of alarms about potential tax errors. I'm happy to say that there were benign explanations, but it didn't let things lie without explanation.

**India is still a very risky place to be in a time of automation-induced unemployment. It's a service economy, and many of the services it provides, like Sams with suspicious accents, or code-monkeys for TCS, are things that could be replaced today. The word is getting out. The outcome won't be pretty. Yet the probabilities are conjunctive: P(I'm laid off and India burns) is still significantly lower than P(I'm laid off), even if the two are likely related. There are also competing considerations that make financial forecasting fraught. Will automation cause a manufacturing boom and impose strong deflationary pressures that make consumer goods cheaper, faster than salaries are depressed? Will the world embrace UBI?

***Note that a consistent extra ten hours of locum work a week is approaching pipe-dream status. There are simply too many doctors desperate for any job.

****That was a good way to end the body of the essay. That being said, I am immensely impressed by Gemini's capabilities and its emotional tact. It asked good questions, gave good answers, and handled my rambling, tear-streaked inputs with grace. I can see the thoughts in its LLM head, or at least the ones that it's been trained to output. I grimly chuckled when I could see it cogitating over the same considerations I'd have when seeing a human patient with a real problem but an unproductive response. I made sure to thank it too, not that I think that actually matters. I'm afraid that, of all the people who've argued with me in an effort to dispel my concerns about the future, the entity that managed to actually help me discharge all that pent-up angst was a chatbot (and my dad, of course). The irony isn't lost on me, but when psychiatrists are obsolete, at least their replacements will be very good at the job.

@Throwaway05 , since I promised to ping you

I apologize, must have read around that part. I'm tired.

Fair enough. Happens to the best of us.

In any case, the problem with the Enlightenment is that while previous worldviews recognized the darkness in Man's soul and sought to contain it through various means, it explicitly rejected this as superstition that can be overcome by destroying social bondage.

This paints with far too broad a brush. Did pre Enlightenment thought actually contain the darkness effectively? The sheer volume of religiously motivated slaughter, systemic oppression justified by tradition, and casual brutality throughout history suggests their methods weren't exactly foolproof. Often, those worldviews simply gave the darkness a different justification or set of targets.

The Enlightenment project wasn't about denying human flaws; it was about proposing better systems to manage them – checks and balances, rule of law, individual rights, the scientific method for vetting claims. It suggested we could use reason and evidence to build guardrails, rather than relying solely on superstition or appeals to divine authority which had a spotty track record, to put it mildly.

Note that we've made meaningful advancements on all these points. The scientific method is a strict subset of Bayesian reasoning, a much more powerful yet fickle beast.

We now know that killing God comes with some consequences. And I think those are not an acceptable trade for vaccines and the pill.

Again, the framing here is reductive. It's not just "vaccines and the pill." It's sanitation, germ theory, doubled lifespans, near universal literacy, orders of magnitude reduction in extreme poverty, modern agriculture feeding billions, instant global communication, and the very computer you're typing this on. That's the package deal stemming from the widespread adoption of reason, empiricism, and technological progress.

Were the horrors of the 20th century a direct result of "killing God," or were they the result of new, secular dogmas (Marxism Leninism, Nazism) that were themselves profoundly anti rational in practice, suppressing dissent and evidence? I'll take the staggering, tangible improvements in quality and quantity of life for billions, warts and all, over a romanticized past that conveniently forgets the endemic misery, violence, and ignorance. Choosing the latter seems like a failure of perspective, or worse.

I'm an atheist, because I remain largely unconvinced that there's a deity there to kill in the first place. If such an entity were to exist, and had condoned the circumstances of material reality without active intervention, then I'd be more than happy to trade for vaccines and pills. They work better than prayer, at the very least.

Here I'll throw you back your own argument. If it is unfair for the Enlightenment to carry the burden of its deaths, it is also unfair for it to claim the glory of human ingenuity insofar as it did not directly create it.

It's not about claiming direct credit for every bolt and circuit board. It's about acknowledging the operating system. The Enlightenment provided the intellectual framework – skepticism of authority, emphasis on evidence, belief in progress, systematic inquiry – that allowed the rate and scale of innovation to explode. It created the conditions. Denying that connection because specific Enlightenment figures didn't invent the iPhone is like saying the development of agriculture gets no credit for modern cuisine.

Sure, but a system is what it does. Pacifism is a terrible idea because it has bad consequences. Much of the problems with Liberalism and its offshoots are in fact down to the fact that good intentions do not reliably produce good results.

We agree consequences matter. But if a supposedly "rational" plan (like Soviet central planning) crashes and burns, the lesson isn't "rationality is bad."* The lesson is "that specific plan was based on garbage assumptions, ignored feedback, and was implemented by murderous thugs." You diagnose the failure mode. You use reason to figure out why it failed – was it bad data, flawed logic, ignoring incentives, Lysenkoist dogma? Blaming the tool (reason) for the incompetent or malicious user is an abdication. The answer is better, more reality grounded reason, not throwing the tool away.

The tradition I'm talking about isn't geographically limited. It's the ongoing project of using evidence and reason to understand the world and improve the human condition. It's a tradition that learns, adapts, and course corrects based on results – unlike static traditions relying on appeals to antiquity or sentiment. It has its own disasters when applied poorly or hijacked by fanatics, sure. But its net effect, globally, has been overwhelmingly positive by almost any objective (via quasi-universality, at least) metric of human well being. I'll keep advocating for that tradition, wherever it takes root, because the alternatives on offer look considerably worse. And yes, that includes weeding out bad applications with more rigorous analysis, not less.

*Don't think that I am arguing, from principle, that "rationality" can't be bad. An alien civilization is gifted the Scientific Method, yet lives under the whims of a devilish and anti-inductive deity. Every attempt to use science leaves them worse off than they found them. In that (contrived) scenario, science would be bad. They'd be better off not trying, at the least. The issue is that it takes such a contrived scenario to show the counterfactual possibility of badness. Or perhaps we get killed by a paperclipping AGI, or the Earth collapses into a black hole thanks to the successor of the LHC. It would take colossal failures of this nature to show that advance of science and reason could even be remotely close to net negative. As we are, it has clearly gotten us further than anything else did, and those options had a headstart of thousands of years.

We have, they don't compare by orders of magnitude. Even Genghis Khan is an amateur compared to Mao or Stalin. Modernity has produced the most evil in all of humanity's history by its own quantitative metrics.

Handily, you're replying to:

The notion that large scale human suffering began with the Enlightenment or its technocratic offspring ignores vast swathes of history. Pre Enlightenment societies were hardly bastions of peace and stability. Quite a few historical and pre Enlightenment massacres were constrained only by the fact that global and local populations were lower, and thus there were fewer people to kill. Caesar boasted of killing a million Gauls and enslaving another million, figures that were likely exaggerated but still indicative of the scale of brutality considered acceptable, even laudable. Genghis Khan's conquests resulted in demographic shifts so large they might have cooled the planet. The Thirty Years' War, fueled by religious certainty rather than technocratic rationalism, devastated Central Europe. The list goes on. Attributing mass death primarily to flawed Enlightenment ideals seems to give earlier modes of thought a pass they don't deserve. The tools got sharper and the potential victims more numerous in the 20th century, but the capacity for atrocity was always there.

At least do me the courtesy of reading my argument, where I've already addressed your claims.

Ah yes, it wasn't real Scientific Government. The wrecker cows refused to be spherical. Pesky human beings got in the way of the New Atlantis.

Well you see I happen to be a pesky human being, and so are you, not New Socialist Men, so I find it very easy to blame the tool for being ill suited to the task. If we can't reach Atlantis after this much suffering, I see no reason to continue.

This mischaracterizes my point. I'm not going all "No True Scotsman" when I observe that regimes like the Soviet Union, while claiming the mantle of scientific rationality, frequently acted in profoundly anti rational ways, suppressing empirical evidence (Lysenkoism being a prime example) and ignoring basic human incentives when they conflicted with dogma. The failure wasn't that reason itself is unsuited to governing humans; the failure was that ideology, dogma, and the pursuit of absolute power overrode reason and any genuine attempt at empirical feedback.

(Besides, I've got a residency permit in Scotland, but I don't think I'd count as a Scotsman. There are True Scotsmen out there)

There's no bolt of lightning from clear skies when people grab concepts and slogans from a noble idea and then misappropriate them. Someone who claims that Christianity is the religion of peace has to account for all the crusades called in its name, that God didn't see fit to smite for sullying his good name.

Well you see I happen to be a pesky human being, and so are you, not New Socialist Men, so I find it very easy to blame the tool for being ill suited to the task. If we can't reach Atlantis after this much suffering, I see no reason to continue.

Like I said, look at the alternatives. Even better, look at the world as it stands, where billions of people live lives that would be the envy of kings from the Ancien Régime. Atlantis is here, it's just not evenly distributed.

Nobody's talking about ditching away reason altogether. What's being talked about is refusing to use reason to ground aesthetics, morality and politics, because the results of doing so have been consistently monstrous, while sentimentalism and tradition produced much better results.

Uh huh. I'm sure there are half a billion widows who dearly miss the practice of sati:

Be it so. This burning of widows is your custom; prepare the funeral pile. But my nation has also a custom. When men burn women alive we hang them, and confiscate all their property. My carpenters shall therefore erect gibbets on which to hang all concerned when the widow is consumed. Let us all act according to national customs. [To Hindu priests complaining to him about the prohibition of sati, the religious funeral practice of burning widows alive on her husband’s funeral pyre.] - Charles James Napier

In that case, it's my tradition, one ennobled by hundreds of years of practice and general good effect, to advocate for a technological and rational approach. Works pretty well. Beats peer pressure from dead people.

The various mountains of skulls and famines in the name of technocratic progress and rationality.

Have you seen the other piles of skulls? This argument always strikes me as curiously ahistorical. The notion that large scale human suffering began with the Enlightenment or its technocratic offspring ignores vast swathes of history. Pre Enlightenment societies were hardly bastions of peace and stability. Quite a few historical and pre Enlightenment massacres were constrained only by the fact that global and local populations were lower, and thus there were fewer people to kill. Caesar boasted of killing a million Gauls and enslaving another million, figures that were likely exaggerated but still indicative of the scale of brutality considered acceptable, even laudable. Genghis Khan's conquests resulted in demographic shifts so large they might have cooled the planet. The Thirty Years' War, fueled by religious certainty rather than technocratic rationalism, devastated Central Europe. The list goes on. Attributing mass death primarily to flawed Enlightenment ideals seems to give earlier modes of thought a pass they don't deserve. The tools got sharper and the potential victims more numerous in the 20th century, but the capacity for atrocity was always there.

At its most common denominator, the Enlightenment presumed that good thinking would lead to good results... [This was discredited by 20th century events]

The answer that seems entirely obvious to me is that if "good thoughts" lead to "bad outcomes," then it is probably worth interrogating what led you to think they were good in the first place. That is the only reasonable approach, as we lack a magical machine that can reason from first principles and guarantee that your ideas are sound in reality. Blaming the process of reason or the aspiration towards progress for the failures of specific, flawed ideologies seems like a fundamental error.

Furthermore, focusing solely on the failures conveniently ignores the overwhelming net positive impact. Yes, the application of science and reason gave us more efficient ways to kill, culminating in the horror of nuclear weapons. But you cannot have the promise of clean nuclear power without first understanding the atom, which I'm told makes you wonder what happens when a whole bunch of them blow up. More significantly, the same drive for understanding and systematic improvement gave us unprecedented advances in medicine, sanitation, agriculture, and communication. The Green Revolution, a direct result of applied scientific research, averted predicted Malthusian catastrophes and saved vastly more lives, likely numbering in the billions, than were lost in all the 20th century's ideologically driven genocides and famines combined. Global poverty has plummeted, lifespans have doubled, and literacy is nearing universality, largely thanks to the diffusion of technologies and modes of thinking traceable back to the Enlightenment's core tenets. To lament the downsides without acknowledging the staggering upsides is to present a skewed and ungrateful picture of the last few centuries. Myopic is the least I could call it.

It is also worth noting that virtually every major ideology that gained traction after the 1800s, whether liberal, socialist, communist, nationalist, or even reactionary, has been profoundly influenced by Enlightenment concepts. They might reject specific conclusions, but they often argue using frameworks of reason, historical progress (or regress), systematic analysis, and the potential for deliberate societal change that are themselves Enlightenment inheritances. This pervasiveness suggests the real differentiator isn't whether one uses reason, but how well and toward what ends it is applied.

Regarding the idea that the American founders might have changed course had they foreseen the 20th century, it's relevant that they did witness the early, and then increasingly radical, stages of the French Revolution firsthand. While the US Constitution was largely framed before the Reign of Terror (1793-94), the escalating violence and chaos in France deeply affected American political discourse in the 1790s. It served as a potent, real time cautionary tale. For Federalists like Hamilton and Adams, it confirmed their fears about unchecked democracy and mob rule, reinforcing their commitment to the checks and balances, and stronger central authority, already built into the US system. While Democratic Republicans like Jefferson initially sympathized more with the French cause, even they grew wary of the excesses. The French example didn't lead to fundamental structural changes in the established American government, but it certainly fueled partisan divisions and underscored, for many Founders, the importance of the safeguards they had already put in place against the very kind of revolutionary fervor that consumed France. They didn't need to wait for the 20th century to see how "good ideas" about liberty could curdle into tyranny and bloodshed; they had a disturbing preview next door. If they magically acquired a time machine, there's plenty about modernity that they would seek to transplant post-haste.

If a supposedly rational, technocratic plan leads to famine, the failure isn't proof that rationality itself is bankrupt. It's far more likely proof that the plan was based on faulty premises, ignored crucial variables (like human incentives or ecological realities), relied on bad data, or was perhaps merely a convenient rationalization for achieving power or pursuing inhumane goals. The catastrophic failures of Soviet central planning, for instance, stemmed not from an excess of good thinking, but from dogma overriding empirical feedback, suppression of dissent, and a profound disregard for individual human lives and motivations.

The lesson from the 20th century, and indeed from the French Revolution itself, isn't that we should abandon reason, progress, or trying to improve the human condition through thoughtful intervention. The lesson is that reason must be coupled with humility, empiricism, a willingness to course correct based on real world results, and a strong ethical framework that respects individual rights and well being. Pointing to the failures of totalitarian regimes that merely claimed the mantle of rationality and progress doesn't invalidate the core Enlightenment project. It merely highlights the dangers of dogmatic, unchecked power and the absolute necessity of subjecting our "good ideas" to constant scrutiny and real world testing. Throwing out the entire toolkit of reason because some people used hammers to smash skulls seems profoundly counterproductive. You can use hammers to put up houses, and we do.