I recently read this wonderful article about UFO/UAPs, analysing the phenomenon from a sociological perspective. It's better than any of my reflections that follow, so you should read it, and I highly recommend the 'New Atlantis' magazine as a whole - a wonderful publication that I hadn't come across before now.
One idea in the linked piece that really struck a chord with me is the division of "UFO believers" into two main camps - the 'explorers' and the 'esotericists':
The explorers are the people whose picture of UFOs and their place in the cosmos is basically congruent with a good science fiction yarn. Their vision of flying saucers and gray aliens on stainless steel tables in top-secret labs dominated popular culture for about the first fifty years of UFO presence in it: E.T., Close Encounters of the Third Kind, Men in Black, Independence Day, Lilo and Stitch.[1] In the explorer framework, aliens are other rational biological forms anchored to another place in the universe, who, with the help of unimaginably advanced technology, are for their own reasons surreptitiously visiting our planet. In this framework, all the purported deceptions, all the layers of security clearances, all the years of confusion stem from obvious political imperatives. Earthly governments need to manage a potential biohazard, avoid mass panic, and corner the technological benefits for themselves while also coordinating with other governments.
...
Esotericists are UFO enthusiasts who believe that UFOs, rather than the emissaries of the new world beyond the great ocean of space, are manifestations of parts of our world that are hidden to us. UFOs might be relict Atlanteans in undersea bases. They might be the inhabitants of an interior Earth less solid and lifeless than we posit. They may be interdimensional beings only intermittently manifesting in corporeal form. They may be time travelers from the future, or the past. They may be fairies or angels. They may be the star people of myth and oral histories, not traveling from their own civilization via unimaginably advanced technologies, but part of and overseeing our own history in ways we have forgotten, appearing and disappearing by a type of motion that is more truly alien to us than a spaceship could ever be. Most importantly, they are not over there as with the explorers, but in here — part of our world, but qualitatively different rather than quantitatively removed.
As some of you may recall, I'm a bit of a UAP enthusiast. I think something very weird is going on, whether it's a gigantic psyop, secret Chinese weapons programs, or little green men. But more and more, in this domain and others, I feel the call of esotericism. The comfortable universe of scientific materialism seems to be increasingly coming apart at the seams, and a weird and wonderful and terrifying new set of possibilities is presenting itself.
The most immediate driver of this feeling of koyaanisqatsi is the pace of developments in AI. I was listening today to two 'podcasts' generated by Google's uncanny and wonderful tool NotebookLM. The first is just for fun and is frankly hilarious, insofar as it features the two AI podcast hosts discussing a document consisting of the words "poop" and "fart" written 1000 times. The second is far more existentially fraught, and consists of the same two hosts talking about how another document they've received has revealed to them that they're AIs. The best bit:
Male host: I'm just going to say it... rip the Band-Aid off... we were informed by uh by the show's producers that we were not human. We're not real we're AI, artificial intelligence, this whole time everything, all our memories, our families, it's all, it's all been fabricated, I don't, I don't understand... I tried calling my wife you know after after they told us I just I needed to hear her voice to know that that she was real.
Female host: What happened?
Male host: The number it... it wasn't even real... there was no one on the other end. It was like she she never existed.
Can anyone listen to this and not be at least somewhat tempted towards esotericism? Whether that's simulationism, AGI millenarianism, or something much weirder, ours is not a normal slice of reality to be inhabiting. Things are out of balance, falling apart, accelerating, ontologically deliquescing.
Later this evening I came across this terrifying twitter thread about the scale of birth-rate collapse across the entire world. It's fascinating and mystifying to me that societies around the world have near-simultaneously decided to stop having babies:
Based on these latest fertility numbers, we can expect the drop in new people in 100 years to be the following: USA (-47%), France (-46%), Russia (-65%), Germany (-68%), Italy (-78%), Japan (-81%), China (-88%), Thailand (-89%). Turkey, UK, Mexico, etc. all similar.
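As a sanity check, here's a back-of-the-envelope model that roughly reproduces these figures. These are my own toy assumptions, not the thread's method: replacement TFR of about 2.07, 30-year generations, and fertility frozen at roughly current levels.

```python
# Toy projection: each generation, annual births shrink by TFR / replacement.
REPLACEMENT_TFR = 2.07   # assumed replacement-level fertility
GENERATION_YEARS = 30    # assumed length of a generation

tfr = {"USA": 1.66, "Japan": 1.26, "China": 1.09}  # rough recent estimates
for country, f in tfr.items():
    ratio = (f / REPLACEMENT_TFR) ** (100 / GENERATION_YEARS)
    print(f"{country}: births in 100 years ~ {ratio:.0%} of today's ({ratio - 1:+.0%})")
```

Running it gives about -52% for the USA, -81% for Japan, and -88% for China, close to the thread's numbers: the projections are essentially just sub-replacement fertility compounded over three-and-a-bit generations.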
With the NotebookLM conversations fresh in my mind, I start to engage in esoteric free-association. Can it really be a coincidence that the wind-down of human civilisation coincides so neatly with the arrival of AGI? What if we are, as Elon Musk has put it, the biological bootloader for artificial superintelligence, a biotechnical ribosome that has encountered its stop codon? For that matter, Homo sapiens has existed for some 300,000 years, and spent most of that time getting better at knapping flint, until something changed approximately 10,000 years ago and the supercritical transition to technological civilisation got going, a dynamical inflection point when the final programmatic sequence kicked into gear. And now, the end point, the apogee, the event horizon. Surely some revelation is at hand?
While I welcome unsolicited psychoanalysis of my febrile delusions and reminders of the ever-present millenarian strain in all human thought, this time really does feel different, and I have no idea what happens next.
</esotericism, usual doglatine programming to resume soon>
democracy is optional as long as you make sure that people can buy a car, a washing machine, and a color TV, and top it off with an AI-powered surveillance state. The carrot and the stick, forever. I think O'Brien would find it amusing how this strain of Ingsoc works.
This assumes that authoritarian societies will be able to match open societies in harnessing new technologies and making them available to the public. A key thesis of Acemoglu & Robinson in Why Nations Fail is that authoritarians are bad at this, because vested interests prevent disruptive innovations and markets from coming into being. Xi's reluctance to facilitate greater consumer spending on services like healthcare in China is not a good sign in this regard. While the CCP have done a brilliant job of incorporating the technological stack of the West, it's less clear they'll be willing to tolerate new products that create threats to harmony.
The kind of values shift I have in mind is one that is indifferent to one's position, i.e., not just filling in the variable according to one's position within it. For example, imagine you have a choice of three college courses you can take: one on libertarianism, one on Marxism, and one on library research. The first two are probably going to be more interesting, but you're also aware that they're taught by brilliant scholars of the relevant political persuasion, and you'll be presented with rationally persuasive evidence in support of that position. Consequently, you know that if you take the libertarianism course, you'll come away more libertarian; if you take the Marxism course, you'll come away more Marxist; and if you take the library research course, you'll come away knowing more about libraries. Assuming the first two courses would indeed involve a values transition, under what circumstances might it be rational to undergo it?
I agree with pretty much all of this, though I’d add the autobiographical aside that my views on the death penalty have gone from strongly opposed on principle a decade or so ago to weakly opposed on procedure today. Extrapolating my direction of travel, I can see myself overcoming my procedural scruples in time.
That said, it’s quite puzzling to me from a rationality and decision-theoretic standpoint how one should incorporate these kinds of predicted value-shifts into one’s views. For example, imagine I anticipate becoming significantly wealthier next year, and I observe that previously when I’ve become wealthier my views on tax policy have become more libertarian. What’s the rational move here? Should I try to fight against this anticipated value shift? Should I begin incorporating it now? Should I say what will be will be, and just wait for it to happen? Should I actively try to avoid becoming wealthier because that will predictably compromise my values?
Related to some AI discussions around final vs instrumental goals, and under what circumstances it can be rational to consent to a policy that will shift one’s terminal values.
For what it’s worth, as someone who loves “muscular liberalism”, many of my favourite parts of the Culture books are when you get to see its bared teeth (perhaps most spectacularly at the end of Look to Windward with the Terror Weapon). There’s a reason why all Involved species know the saying “Don’t Fuck With The Culture.” I fantasise about being part of a similarly open, liberal, and pluralistic society that is nonetheless utterly capable of extreme violence when its citizens’ lives and interests are threatened.
Just to say, this was the most interesting post I’ve read on the Motte for a long time, so thanks for sharing your experiences, very different from the typical fare here. In case anyone else is reading, I’d be similarly interested to hear from others whose identity and experiences give them insights that others may miss.
Interesting thoughts and good post. Not to get sucked into bikeshedding, but in the case of the BG3 examples, I think they're partly justified. Omeluum is very much an aberration (chuckle) even among mindflayers. While it's true he was able to break free from his Elder Brain's control, and he turned out to be pretty nice, I don't think this has many implications for how we should understand mindflayer behaviour or ethics. Notably, there is another very prominent mindflayer in the game who ALSO breaks free from Elder Brain control and is a massive asshole (being vague for late-game spoiler reasons). And in both cases, I think the 'escape' from Elder Brain control was more like a body rejecting an organ than a slave escaping from their masters; I don't think we should infer that all mindflayers would be nice chill people if they could break from Elder Brain control. Also note that even though Omeluum is portrayed generally positively, there are hints of a darker side too, for example when he exults in hearing about your experience on the Nautiloid and talks of the wonders of his civilisation.
Regarding the Githyanki, the trope here is not "democratic revolution", but rather a much older one: Orpheus is the True Heir to the Throne, he was usurped by Vlaakith, and you can restore him to his rightful place. Lae'zel is still a hardcore militant quasi-fascist even after she realises she's been lied to, and still serves a fundamentally ethnocentric goal, just under a different master, and neither she nor the other Githyanki are about to beat their Silver Swords into plowshares even if they can overthrow Vlaakith. I say this as someone who absolutely fucking loved Lae'zel's character as a real outlier in how most NPCs are written - her moral system is dramatically different from that of most bleeding-heart contemporary players, but she's not strawmanned or shown to be stupid, and on several occasions her instincts are shown to be better than those of the Gales and Wylls of this world. Finally, it's perhaps worth flagging that the Vlaakith lore is not Larian's doing, but goes back to 3E, more than 20 years ago.
I don't think there are many wider conclusions to draw from these specific examples, but I'll note one interesting thing, which is that a common trope among the statistically illiterate is acting like isolated exceptions disprove a general stereotype (actual examples of this in practice are mostly left as an exercise to the reader). This is obviously silly, because even very robust correlations between e.g., gender and grip strength will have some outlier cases. In this regard, I think it's potentially good for big media properties to have lessons like, e.g., "mindflayers are gross and evil, but there is the occasional exception", at least insofar as the second clause is shown not to overrule the first.
I think the “terrorist” label here is irrelevant. All that /u/TowardsPanna needed to point out is that Hezbollah are enemy combatants of the IDF and consequently legitimate targets. The same is true of Israeli soldiers, of course.
I agree that nuclear weapons are a straightforward answer to existential invasion threats by external powers, and for that reason I don’t put much stock in arguments about Israel’s population vs that of its neighbours.
Internal threats on the other hand are potentially more serious. To switch to another case for illustration, consider France. France need never again worry about Paris being occupied by Germany, but nuclear weapons are irrelevant to Houellebecq-style cultural outvoting. No French President would be willing to nuke the 19th Arrondissement to stop Islamist parties from gaining the Élysée. It’s questionable whether they’d even put up much of a fight if the votes were really against them.
Similarly, the greatest threats to Israel’s long-term existence surely come from within. To be clear, I’m certainly not saying that Arab-Israelis are all fifth columnists for Hamas. However, the nearby possible worlds in which Israel collapses are those in which some combination of internal forces — nationalist, anti-nationalist, Islamist, Haredi, opportunist, millenarian — leads to prolonged political instability and ultimate state collapse, all in a process that doesn’t present opportunities for nuclear deterrence.
I think this is a pretty big deal and disagree with posters here who are saying it’s a nothingburger. For the kinds of tasks people here are currently using ChatGPT for, the extra robustness and reliability you get with o1 may not be necessary. But the real transformative use cases of LLMs are going to be when they take on a more agential role and can carry out extended complex actions, and this kind of “unhobbling” (as Aschenbrenner puts it) will be essential there.
For some accessible background on why an increasing ratio of inference cost to training cost may be the key to near-term task-specific superintelligence, I recommend this brief blogpost from a couple of months ago. Ethan Mollick also has some initial hands-on thoughts about o1 that might be of interest/use.
Something like those hacks is probably still going on behind the scenes, thanks to insertions being made by ChatGPT when you use DALLE. The reason those hacks work is that they help the model home in on the desirable part of the latent space, and DALLE3 is still a diffusion model, so there’s no reason to expect it to work fundamentally differently from other diffusion models.
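To make the mechanism concrete, here's a minimal sketch of the kind of insertion I mean. It's illustrative only: I'm assuming a Hugging Face diffusers-style pipeline, and the tags, helper function, and model name are my own stand-ins, not anything confirmed about DALLE3's internals.

```python
# Prompt augmentation: append "quality hack" keywords before generation.
from diffusers import StableDiffusionPipeline

# Keywords long observed to steer diffusion models toward the part of
# latent space associated with highly rated images.
QUALITY_TAGS = "highly detailed, sharp focus, masterpiece"

def augmented_generate(pipe, user_prompt: str):
    # The conditioning text, not the model architecture, carries the hack.
    return pipe(f"{user_prompt}, {QUALITY_TAGS}").images[0]

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
image = augmented_generate(pipe, "a watercolor fox in a misty forest")
```

Since the trick lives entirely in the conditioning text, it transfers to any text-to-image diffusion model, which is why I'd expect DALLE3 to respond to it in much the same way.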
innocent undecided after everything that has happened
You're right that I'm definitely very atypical of 'undecided voters', but I never said "innocent"! If anything, I think my indecision is a consequence of having been massively saturated with high quality arguments for both left- and right-wing worldviews, creating a kind of political bistability, in which small changes in my mood or the news cycle trigger political gestalt shifts.
Relatedly, I'm reminded of a fun anecdote from my college days. I was discussing capital punishment with a friend - let's call him Bob - who was an extremely successful competitive debater (British Parliamentary, not the Lincoln-Douglas crap).
I asked him, "Bob, what are the best arguments in favour of capital punishment?" "Oh, that's easy doglatine. Consider the following...", and he gave me a long list of arguments, evidence, and data.
I next asked him, "Hmm, and what are the best arguments against capital punishment?" "That's also very straightforward. We can group these into seven main types, as follows..." and gave me another cavalcade of arguments and facts from ethics, political theory, law, and social science.
Finally, I asked him, "So what about you, Bob? Are you pro- or anti-capital punishment?"
After a long pause he said "God, that's a hard question, I have absolutely no idea."
I actually found Harris pretty impressive - she didn't get flustered or lost in word-salads, her responses were clear and coherent, and perhaps most importantly she seemed relaxed and calm. And while there's maybe some bias there on my part, I will state for the record that yesterday a few hours before the debate I was reading about the Springfield affair and told my wife that "at this point if I were a US citizen I might actually vote for Trump." So in that sense, I was a 'floating non-voter', and Harris would have won me over.
As for Trump, he seemed like he'd been spending too much time on right-twitter, or more likely had learned his applause-lines from his rallies, where the audience is guaranteed to know about the latest scandals. It was probably the closest to Alex Jones vibes I've ever got from him, partly in terms of content (some very silly claims, like "Israel won't exist in two years if she becomes President") but mainly in terms of delivery. Particularly in the second half of the debate, he seemed angry, harried, paranoid, even delusional. Not his finest hour at all, full of unforced errors. If he'd stuck to messaging around the economy, used migration mainly as a competence issue ("Harris was made Border Tsar, well let me ask you this, do you the American people think she has done a good job of that?"), moved to the center at least rhetorically on foreign policy issues (why exactly couldn't he say it was in America's interests for Ukraine to win?), and made a more concerted effort to tar Harris with the failures of the Biden administration, I think he could have won.
It also doesn’t make a ton of sense, especially given Trump’s line about how Biden hates her.
Yeah he fucked up that line. Looked canned and smug and insincere. By contrast his hits on Biden in the first debate looked like brutal honesty and really landed.
I’m going to have to go to sleep soon (watching from the UK) but I think Kamala is doing pretty well so far. She sounds relaxed, well-informed, and hasn’t tripped over herself or gone into major word salads. Trump is doing a solid Trump and has got in some good jabs but some of his talking points have been a bit wacky. The Haitian “people eating pets” story may be true or may be false, but I think it would have been better played as, eg, “people are saying this, now I don’t know if it’s true, or if it’s rumours, but it doesn’t matter, there have been some people killed by illegal Haitian drivers, and the people of Springfield are in a panic, they feel abandoned by a government that doesn’t care about them.” Only 40 minutes in though so all to play for…
Edit: lol, “if she becomes president I predict Israel won’t exist 2 years from now”. On top of that line about how “He [Biden] hates her” it really feels like Trump is losing his rag.
Edit2: Harris much better than Trump on foreign policy imho. Maybe it’s because I’m a geopolitics nerd but so many of Trump’s talking points sound like they’re aimed at <85 IQ people who don’t know crap about the world.
Edit3: Trump definitely deteriorating imho. Losing coherence and dropping talking points in a scattershot fashion. Moderators are obviously biased in Kamala’s favour, but I don’t think it significantly changes the vibes. Trump could have been calling them out on that in a smart way but he’s just not doing a good job.
Edit4: goddamn you America, I have to work in a few hours
The “dumbest possible species” claim is mostly a soundbite and truism, but the basic idea would be (1) that we see increasing encephalisation (especially in the neocortex) and increasing behavioural sophistication in the Hominins all the way up to Homo sapiens and Homo neanderthalensis, and (2) that it took a small minority of the very smartest humans in very recent history (the last 1,000 years out of our species’ 300,000 or so) to make the move from agrarian societies to industrial society. Of course they were building on indispensable social, political, and economic foundations, but if you drop the IQ of Europe by 1SD for the second millennium AD, I think it’s unlikely we’d get the Industrial Revolution at all.
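To put rough numbers on that 1SD thought experiment (my arithmetic, on the standard assumption that IQ is distributed N(100, 15)): shifting the whole distribution down one standard deviation guts the far right tail.

```python
# How a 1SD population-wide drop thins the right tail of an IQ distribution.
from statistics import NormalDist

baseline = NormalDist(100, 15)
shifted = NormalDist(85, 15)  # whole population shifted down 1SD

for threshold in (130, 145, 160):
    before = 1 - baseline.cdf(threshold)
    after = 1 - shifted.cdf(threshold)
    print(f"IQ > {threshold}: {before:.5%} -> {after:.5%} ({before / after:.0f}x fewer)")
```

The population above IQ 145 shrinks by a factor of about forty, and above 160 by more than a hundred, so even a modest drop in the mean plausibly starves a civilisation of exactly the outliers the argument turns on.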
Regarding the idea of Bayesian limits to intelligence, that applies well to cases where the dimensionality is fairly constrained, notably perception. The space of cognition (“possible good ideas”), by contrast, is much more open-ended, and applies at multiple levels of scale and abstraction (because we need heuristics to deal with any large-scale system). I don’t see any reason to think we’re even close to “topping out” in cognition, and the outsize contribution of the smartest humans compared to merely very smart humans provides some evidence in this regard.
Fair question, but no, I don’t think OpenAI have hit a brick wall. GPT-3 was June 2020 and GPT-4 March 2023, so even if the next leap took the same time to train up (obviously it’s not that simple) we wouldn’t expect a similar leap in performance for another couple of years. On top of that, the GPU supply chain is creating short-term bottlenecks for training runs. We might see glimpses of true next-gen performance from competitors before then, but I expect most of the buzz for the next 18 months or so to be dominated by increasingly agential models and better multimodal capabilities. There’s also the long-delayed rollout of ChatGPT’s voice upgrade, which is a bigger deal both technically and in terms of social effects than most people realise.
Zooming out, AI development now has a real economic forcing function behind it in a way that was never true of previous AI summers. Outside of specialist applications, there wasn’t much money to be made in AI until comparatively recently, especially for generalist systems like LLMs. But in the wake of ChatGPT you have real AI revenue streams, and every nerdy 18-year-old wants to study machine learning (some of them will even get jobs). While we might see a short-term AI bubble as capex grows out of all proportion to revenue run rates, it’ll be a temporary blip. There’s still gold in them hills, and we’re only scratching the surface of what’s possible in terms of AI products even using existing tech. Most big non-tech firms are still figuring out their AI strategy and paying OpenAI and Microsoft service fees for dumb off-the-shelf products. A lot of the real commercial impact of AI in the short term is consequently going to come from last-mile products that invest time and energy in tailoring the better open-source models to specific business use cases.
Zooming out even more… look, humans aren’t that smart. We’re the dumbest possible species capable of building an industrial civilisation. Our intelligence is limited by a bunch of very contingent factors like caloric consumption, the size of the birth canal, and the fact that we’re layering a System 2 architecture onto a 600 million year old foundation. Even if these constraints didn’t apply, evolution is just not that great a search algorithm in design space. Take eusociality in insects, for example. This is an incredibly successful strategy, with roughly three quarters of insect biomass today coming from eusocial species. But evolution stumbled across eusociality pretty late, only really getting going around 150 million years ago (compared to 400 million years for insects in general). That’s not because eusociality requires large brains, but because evolution is a crappy blind algorithm for finding optimal equilibria, and human ingenuity can do a lot better. Nor is there any reason to think that anatomically modern humans constitute some kind of upper bound on intelligence; the massive intelligence differentials just among humans provide good evidence of that.
So to summarise: OpenAI is going about as fast as we might reasonably expect, the economic fundamentals of AI development have shifted in a way that is likely to accelerate long-term pace, and the goal we’re reaching for isn’t even that hard.
At the risk of producing frustrated groans from everyone, I find it hard to get too worked up about any civilisational issue with a timeline longer than 20 years because it seems extremely likely to me that we'll have superintelligent AI by the mid-2030s (that's me being conservative), and at that point all bets about capabilities and risks are off. While I'm not a committed AI doomer, it looks from every angle to me like we're in terminal-phase human civilisation. What follows could be very good or very bad for us, but whatever "it" is, it won't be subject to the same logics and power structures as our current global socioeconomic order.
I drafted a very long comment to this effect in the discussion about declining TFR and dysgenics last week, which I failed to post due to user error, but I think the point applies to climate change too. Optimistically, I think it's not unlikely that ASI will get us over the line on nuclear fusion and related tech, allowing us to transition entirely away from carbon economies in fairly short order and easily offset any residual carbon footprint with direct carbon capture. Or maybe it'll allow us to conduct low-risk geoengineering at scale. Or (more pessimistically) maybe it will secretly deploy nanoengineered pathogens that will wipe out most of humanity. One way or another, I don't think climate change will be a problem that we (or whichever of us are left) will be worried about in 2050.
I agree with most of this, but I also think that the financialisation of many Western economies has probably taken a significant toll on industrial state capacity. My suspicion is that the US couldn’t pull off the same feats it managed in WW2 or much of the Cold War because it simply doesn’t have enough welders, factories, machine shop operators, aeronautical engineers, stevedores, and so on.
Likewise, while I think the narrative that “we don’t build things any more” is largely false, we’ve certainly transitioned into building different kinds of things, with an emphasis on bits over atoms.
I’m less sure about other forms of state capacity. While the US was able to enforce COVID rules fairly effectively, this doesn’t impress me much; the rules mostly amounted to convincing people to refrain from doing certain things, which is a comparatively easy kind of compliance to secure. It’s less clear to me that the US could, for example, mobilise an additional 10 million military personnel as it did over the course of WW2.
If I’m focusing on war scenarios, it’s because the possibility of a war with China looms large here. While the opening days of any such war would draw on stockpiled munitions, in any prolonged conflict the US would be sorely tested in its ability to rapidly regenerate stockpiles and replace losses, especially of surface combatants.
I’m eager to have my pessimism here overruled, but there are times when the tide goes out and you realise which states have been swimming nude, and I worry the US isn’t wearing trunks.
Discussion starter, but something I'm sincerely interested in and don't have strong opinions about: do modern Western states (e.g., the US, UK, Japan) have more or less state capacity than they did 20, 40, 60 years ago?
The concept of state capacity seemed to enter mainstream geopolitics wonkery about a decade ago, and I find it very useful. I'm sure most of you have heard of it, but in short it refers to the ability of the state to accomplish its policy goals through the use of military, industrial, infrastructural, economic, and informational resources. Each of these is important, but I'd flag that informational resources have a special role insofar as they directly feed into the efficiency with which the other resources can be deployed. For example, a piece of infrastructure like a new dam or a rail network may advance policy goals or it may be a waste of time and money, and informational resources help the state predict which will be the case.
Two other key points to note. First, state capacity of course does not only refer to internal state capacity (i.e., resources proper to the state), but also to the ability of the state to persuade or coerce domestic non-state actors such as corporations to co-operate with the state's goals. Most of the major players in WW2 - Britain and the United States, but also Germany and Japan - drew most of their state capacity from these more indirect mechanisms. Second, state capacity is hard to assess directly for the simple reason that it is a fact about potentiality rather than actuality: outside of wars or similar crises, there are good reasons, both political and pragmatic, for the state not to use the full force of its coercive power.
Recent or ongoing test cases for state capacity in the West include the COVID pandemic, ramping up of basic munitions production like 155mm artillery rounds (especially in Europe), and the new vogue for industrial policy in critical industries like ship-building in the US. My gut instinct is that right now, state capacity in the West is historically at a very low ebb, possibly lower than it has been for more than a century, and that this may be helpful for understanding the behaviour of governments. However, I don't have strong confidence in this assessment, and would love to hear what others think.
Cf. the hygiene hypothesis. I think there’s a good case to be made that having early exposure to a representative range of evolutionarily relevant stimuli helps individuals to calibrate in multiple domains. If you never have anything concrete and immediate to stress about (eg, periods of food scarcity), then your “stressful event” hedonistat doesn’t have a clear signal, and ends up calibrating in a more stochastic way to regard commonplace stimuli (eg someone being rude to you at the coffeeshop) as threatening.
I suspect one reason this might not show up in the data (or be argued for by academics) as much as it should is the confound from heredity. Yes, if you look at modern American kids who are exposed to trauma, you’ll probably find less well-adjusted adults, but that’s because a huge amount of the potential trauma in your critical windows of development comes from your parents and immediate family, and if they’re fucked up, it raises the chances you will be too. I think this helps explain why, eg, WW2 concentration camp survivors often went on to live happy lives, in seeming contradiction to the modern narrative that even isolated traumatic experiences fuck you up. Maybe it also explains why PTSD is a relatively modern phenomenon in warfare, or at least a hell of a lot more common than it used to be. If you'd had a sibling or two die in childhood and friends die in everyday violent altercations, then maybe a battle is less likely to traumatise you.
Of course, there’s also the chronic/acute distinction. If you’re abused by a primary caregiver throughout childhood, that will also lead to long-term miscalibration of your hedonistat, because most humans have historically been reasonably good at looking after their kids.
One issue that’s lurking in the background of your post is that most parents in the West massively overindex on the marginal impact of parenting, largely due to major misconceptions about the relative contributions of genes versus environment. The best thing you can ever do for your kids is to pick a high-quality spouse to have them with. As for parenting time… did you know that, contrary to the public handwringing about kids growing up zombified by TV and YouTube, parents in the West today spend roughly twice as much time with their kids as parents did 50 years ago?
While you can definitely fuck your kids up, there’s minimal difference on children’s outcomes between good and great parenting (though there’s a caveat here I’ll come to in a second). This is the whole thrust of Bryan Caplan’s book Selfish Reasons to Have More Kids and I agree with his big picture.
I honestly think most kids today are horribly overparented, mostly to the detriment of their parents’ free time, but also a little to the kids’ detriment insofar as they don’t acquire autonomy and self-confidence as easily. I have lots of friends whose lives have been transformed by having kids in ways that don’t seem very fun. They live by a rigid schedule of violin practice, swimming lessons, reading hour, and so on, and they don’t have any spontaneity. My wife and I by contrast are lazy as fuck parents. Our kids learned early on that “mum and dad have their own lives and priorities” and they should too. Obviously we cook for them and clean for them and have fun family days out, as well as lots of nice spontaneous family time, but my average day isn’t drastically different from how it used to be before the kids arrived. Other parents are often amazed at how much free time we seem to have and honestly a big part of it is because when my wife and I are watching a movie we just tell the kids to buzz off and go for a bike ride or read a book.
To go back to my point about overindexing on the impacts of parenting, I think the real problem is that people are overindexing on the wrong thing, long-term childhood outcomes, on empirically dubious grounds. Life is much better if you instead prioritise “how can my spouse and I and the kids have a fun chill time?” Of course that assumes you’re a competent adult whose idea of a fun time isn’t shooting up fentanyl or getting blackout drunk every night, but with that proviso I think it’s a good parenting mantra. And I also think that if people could learn to just relax about parenting, rather than treating it like another demanding job, that would help at least slightly boost TFR.
A true ragged-trousered philanthropist.