
Culture War Roundup for the week of June 2, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Schelling points of the online right and occasionally irrational rationalists.

In many ways nonsense is a more effective organizing tool than the truth. Anyone can believe in the truth. To believe in nonsense is an unforgeable demonstration of loyalty. It serves as a political uniform. And if you have a uniform, you have an army.

  • Mencius Moldbug

Go online today and you will see quite a bit of absurd garbage supported by the online right, even as they call out their left-leaning counterparts for the same thing. The catalyst for this post was the backlash finasteride got yesterday, which I saw upon reactivating my Twitter account.

Finasteride is usually taken as a 1 mg oral pill. It is a 5-alpha reductase inhibitor used successfully to treat male pattern baldness in men. It lowers your body's DHT levels, which is fine if you have already gone through puberty. The drug was originally used for men with prostate issues and accidentally ended up being the single most effective intervention for male pattern baldness, even more so than its more potent cousin, dutasteride. The side effects can be quite strong: lower libido, in extreme cases ED, and mood swings; in the worst cases men end up needing what is called post-finasteride therapy. Right-wing faux-masculine bros call for the companies making it to be charged with crimes against humanity. The funny thing is that the share of people who get side effects is close to 2 percent or less, depending on which study you choose. In fact, it is safer and has fewer reported side effects than many medications people take daily. So why are people lining up against a drug that is not just safe but a damn near modern-day miracle? Nothing can stop male pattern baldness the way it can, so much so that minoxidil, a medication used to promote regrowth, is useless without it, as you will keep losing more hair than you gain. Hair transplants, by the way, require you to hop on the same two drugs, if not more, so that you keep your remaining hair.

Seed oils are oils extracted from the seeds of various plants. They are very cheap, and the fast-food industry uses them heavily because of that. Anyone not living under a rock has heard reasons not to use them. Butter, ghee, lard, olive oil, and other oils higher in saturated fat are supposedly much better by all accounts; even "stats bro" and "IQ-denying" online bully Nassim Taleb swears by them. Yet the data is pretty unfavorable to that view. Now, I am a twig compared to what I wish to be, so I will share what the people over at Barbell Medicine think: doctors who also have really high totals in drug-tested powerlifting. They state that every single paper they have come across showed that replacing these "better" oils with seed oils produced much better health outcomes.

I use both. I hopped on finasteride three years ago, and my family has been using seed oils since my grandfather's heart attack. I am willing to ditch both if that is the way. Yet if you press someone on the online right who swears by the benefits of "sun and steel," he would probably concede that both might be fine, but that some people have had terrible experiences with them. Hence the crusade against them: it at least lets people not feel alone when they question the validity of what "science" has to say. Plenty of studies, papers, and people are simply incorrect. You will never see a large-scale study that gets public eyeballs present group differences as innate. Hell, the good folks over at ScienceBasedMedicine go out of their way to lie about "science" when it comes to any leftist values. ScienceBasedMedicine is a popular skeptic blog that did its best to be neutral and was fairly rigorous. Its contributor Harriet Hall, another person who is not a rabid reactionary, faced scorn for a milquetoast review of a milquetoast book that states very obvious things about transgenderism. The entire blog went into a lefty purity spiral and has since pushed out the kind of stuff you would expect from Jezebel on the issue.

So, the authorities are wrong about a lot of things. The world is indeed run mostly by leftists, and science is just a thing for them to justify their holy cows and why they must not be questioned. People here already know this part, but I am trying to provide context for newer "mottizens." This goes deeper, which is why I brought up faux masculinity. War is the ultimate masculine experience, with the ability to exert power being a near equivalent, or maybe even something that surpasses it. The online right (me included) lacks both. Man wants people to cooperate with, to feel like he is part of a clan, and memes like seed oil hatred and finasteride fear-mongering are no different from the conservative ones (living on some ranch with a podcast setup where you talk about guns and Black Rifle Coffee whilst shilling for Israel) or the lefty ones where you deny basic human nature to varying degrees.

You also have a rationalist counterpart to this, which is AI hype, wherein people write literal sci-fi pieces and hold a view of AI that the people who worked on it mostly did not, and many still don't. Scott Alexander is a great guy; his work is the reason this place exists. Reading his 2027 piece made me feel a bit odd; the man who posted the most well-thought-out takes on medicine and personally helped many people wrote something that is flimsy at best. Gary Marcus wrote a decent critique of it (he can be an asshat but is right here), and 4chan's /g/ largely agreed with it. After all, LLMs have in fact slowed down in terms of progress. Anthropic's CEO has been warning us, since at least 2023, that AI will take away all jobs within two years, much like self-driving cars were always two years away. The progress has been remarkable, yet the hype around it has not paid off so far. Jeremy Howard, who wrote ULMFiT (one of the most important papers in NLP according to many, so much so that transfer learning for ChatGPT was inspired by it), simply laughs at statements about AI taking away all jobs, publicly claiming that we are as far away from ASI or AGI now as we were 20 years ago. I am a novice coder; my friends who do write code usually come away feeling angry when they use LLMs for their coding work, despite being proficient with DSPy and prompting in general. The average person on this place, or ACX, or LessWrong, has an IQ in the high 130s, with people who write code making up a big part of the reader base, yet many seem unwilling to change their beliefs about any of what I just listed.

Schelling points are easy to see for an outsider; the weirder they are, the more visible. Once you are in a group, though, your worldview shifts a little to match your clan's. Many hackers, in 5-10 years' time, will probably admit that the podcasts hosting people who run these firms obviously want more hype, since hype sells their product. People want to be part of something; no man is an island. I bought plenty of stupid, outright lies during my time working with a co-founder who is clearly in need of psychiatric intervention. I bought them fully, like I bought the lies of a religious sect before that. The rationalists and the online right are not bad people; these Schelling points are kind of benign. In the case of rationalists, it's not even a point as major as seed oil disrespect among the "bronze age warriors," yet as a person on the fringes of both, it was funny to see both go to great lengths to keep their holy cows alive. 4chan's /g/ is a toxic place full of bitterness, but its dismissal of the 2027 AI predictions, and of the degree of faith many on LessWrong place in our ability to produce synthetic intelligence, was not off the mark. I really do like LessWrong's stuff; their pieces on things beyond AI, and many on AI, are worth reading, and SSC inspired the one place on the internet I like visiting and have benefited a lot from. Yet I am willing to eat downvotes and get blocked by people for pointing out things that I think are likely false. LLMs may take away all jobs, fin and seed oils might make me a beta soyboy, and we may need to accept that the singularity is upon us, but I will bet against all of that for now, not because I am a contrarian but because I don't want to blindly accept memes that are probably wrong.

edit - typos

Jeremy Howard, who wrote ULMFiT (one of the most important papers in NLP according to many, so much so that transfer learning for ChatGPT was inspired by it), simply laughs at statements about AI taking away all jobs, publicly claiming that we are as far away from ASI or AGI now as we were 20 years ago.

I looked at the clip where he says this; he says there's no more reason to think ASI is close than there was 15 years ago. He says people are fooled by the interface changing from computer-friendly to human-friendly, so our brains think it's qualitatively different.

The man is fundamentally unserious. I don't care what papers he's written or what expertise he has. It doesn't matter if Major-General Augustus Smythe fought against the whirling dervishes of Sudan with distinction; if he thinks a bayonet charge is going to beat a machine gun, he's a fool. Augustus Smythe doesn't really think this; it's more that he looks down upon all this low-class, crass engineering taking the limelight from his glorious, romantic cavalry regiments. He's not going to actually charge a machine-gun nest with his saber; he's not really confident in what he's saying. Jeremy Howard is no different: he admits the progress on benchmarks and the progress of recent years, albeit in an understated way. What benchmarks of AI coding were there 15 years ago, I wonder? He doesn't truly believe the nonsense he's saying; he wants to express a sober, mature, classy, balanced position like Yann LeCun and the others. It's a reaction to style and taste rather than anything substantial. Gary Marcus does the same thing and is infamously wrong in so many of his predictions.

It's perfectly understandable to oppose the nerdy, icky AI doomers or eager non-technical singularitarians or the slick, snake-oil-seeming marketers. It's very seductive to be 'the adult in the room'. But you can't let that get the better of you and mislead people on a very serious matter.

There is a qualitative difference, a stark and obvious qualitative difference, in asking a question and getting an immediate answer from an AI, not just in a single domain but in so many domains, at considerable depth, where the 'question' might be laying out the setting for a fictional universe and the 'answer' could be a thousand words of a story. It is obvious that huge strides towards superintelligence have been made since 2010. Coding, vision, reasoning, plotting, extended pursuit of abstract tasks... ASI is much, much closer than in 2010, when all there was was IBM Watson and Siri.

After all, LLMs have in fact slowed down in terms of progress.

No they haven't. The shift to reasoning models happened 6-9 months ago. The new R1 came out a matter of days ago; it's super cheap and a massive leap up in writing. I haven't even tried it on code, since Claude is so good. 'Progress is slowing' is, ironically enough, a real mental illusion as old benchmarks get saturated. There's been next to no progress on MMLU because we've moved on from it to new challenges.

Edit - I'd like to retract my statements about AI safety. I'll pen a clear criticism later; my comments here are not very coherent. Apologies.

The absolute benefit provided by these language models is high, but it's minuscule compared to the constant hype they get.

Jeremy Howard is not an unserious person. He may be cavalier, but he has actually changed machine learning for good: DAWNBench, his work on Kaggle, and ULMFiT are impressive feats, and much more serious than most AI safety people, who couldn't train a simple model with LLM assistance if their lives depended on it.

I see the constant obsession with ASI as a religious obsession, not one steeped in rationality, as many rationalists are simply rationalising what they already believe.

I was using Google's reasoning model yesterday for a beginner exercise in Processing, where it got basic beginShape() arguments wrong despite me linking it to the repo. Things are better; I'm not sure we are closer.

Language models spit out text; they aren't a different form of human cognition. I would change my tune if the AI safety people and those making money from these models were to stop with the constant lies and plot out a decent graph that does not assume scaling continues beyond a point (reasoning included).

I don't want sci-fi stories and podcasts where you keep quoting the same three people. It's egregious: you have amazing tech, but you're using it to talk about hypothetical scenarios right out of a movie.

Gary Marcus has been wrong about neural networks since forever, but his post on the benchmarks floating around isn't completely fake. We know how transfer learning works, and we know how transformers work; why should we believe that something magical will happen such that they keep progressing super fast (which they aren't, as even top-of-the-line models get plenty wrong despite using DSPy)?

I'm not an insane AI skeptic; my bachelor's thesis was on machine learning with graphs. I'm a noob, but I'm not a complete illiterate. You attach a sane conclusion (LLMs are getting better) to equal parts insanity (they will come up with ways to automate away research or engineering tasks), and that is hilarious.

I may very well switch to more ML-focused things after my sabbatical ends, but it won't be out of religious fervor. I like computers; I liked seeing my computer spot birds in photos, but that's not the coming of ASI, and it's not why I would do it either.

they will come up with ways to automate away research or engineering tasks

This is already happening. Papers have been published on it! This is partly why the AI safety people start to sound so deranged, because people are confusing reality with science fiction, not the other way around.

Research and engineering are being automated, piece by piece. R1 can write helpful attention kernels: https://developer.nvidia.com/blog/automating-gpu-kernel-generation-with-deepseek-r1-and-inference-time-scaling/

Also consider this paper:

Many promising-looking ideas in AI research fail to deliver, but their validation takes substantial human labor and compute. Predicting an idea's chance of success is thus crucial for accelerating empirical AI research, a skill that even expert researchers can only acquire through substantial experience. We build the first benchmark for this task and compare LMs with human experts. Concretely, given two research ideas (e.g., two jailbreaking methods), we aim to predict which will perform better on a set of benchmarks. We scrape ideas and experimental results from conference papers, yielding 1,585 human-verified idea pairs published after our base model's cut-off date for testing, and 6,000 pairs for training. We then develop a system that combines a fine-tuned GPT-4.1 with a paper retrieval agent, and we recruit 25 human experts to compare with. In the NLP domain, our system beats human experts by a large margin (64.4% v.s. 48.9%). On the full test set, our system achieves 77% accuracy, while off-the-shelf frontier LMs like o3 perform no better than random guessing, even with the same retrieval augmentation. We verify that our system does not exploit superficial features like idea complexity through extensive human-written and LM-designed robustness tests. Finally, we evaluate our system on unpublished novel ideas, including ideas generated by an AI ideation agent. Our system achieves 63.6% accuracy, demonstrating its potential as a reward model for improving idea generation models. Altogether, our results outline a promising new direction for LMs to accelerate empirical AI research.

Are there caveats on this? Yes. But are AIs running AI research hilarious? No. Nothing about this is funny or deserving of casual dismissal.

I think that likening the rationalists' treatment of AI to the anti-finasteride crowd is a bit unfair to the former.

Now, AI has been a theme with rationalists from the very beginning. It would not be totally unfair to say that our prophet wrote the sequences (e.g. A Human's Guide to Words) as an instrumental goal to be able to discuss AI without getting bogged down in pointless definitional arguments. That was almost two decades ago, in the depth of the AI winter.

Scott Alexander wrote about GPT when it was still GPT-2; it was the first time I heard about it. It is fair to say that AI is the favorite topic on LessWrong, with Zvi minutely tracking its progress with the same dedication previously allocated to COVID. Generally, the rationalists are bullish on capabilities and bearish on alignment. But I feel that Eliezer's haha-only-serious "dying with dignity" April Fools' post is overconfident in a way which is not typical of LW. In practical terms, it does not matter much if you think that p(ASI) is 0.15 and p(doom) is 0.1, or if you think they are 0.95 and 0.9 respectively.

We do not have a comprehensive theory of intelligence. We have noticed the skulls of the ones who predicted that AI would never beat a chess master, succeed at go, write a readable text, create a painting which most people cannot distinguish from a human work of art, and so on. This does not mean that AI will reach every relevant goalpost; reverse stupidity is not intelligence, after all.

We are in the situation where we observe a rocket launch without the benefits of any knowledge of rocketry or physics. Some people claimed the rocket would never reach an altitude of more than twice its own length, and they were very much proven wrong. Others are claiming that it would never reach 1km, and they were likewise proven wrong. From this, we can not conclude that it will obviously accelerate until it reaches Andromeda, nor can we conclude that it will not reach Andromeda.

Wrt AI 2027, the vibe I remember getting from browsing through it is that it is mostly Simulacrum Level 2, and it came across as one of the least honest things Scott has ever co-authored. The whole national security angle is very much not what keeps LW up at night -- if China builds aligned ASI, they have a whole light cone to settle, and what happens to the US will be just a minor footnote in history. But the authors recognized that their target audience -- policymakers in DC -- would likely be alienated by their real arguments about x-risk. By contrast, national security is a topic which has been on the minds of the DC crowd for a century, so natsec was recruited as an argument-as-a-soldier.

My unpopular opinion on anything AI safety or alignment aligns with a top-level comment from a few weeks ago and the general skepticism some have voiced very well here.

The religious fervor around this seems pretty irrational, with Scott getting people over at /r/slatestarcodex calling him names for it. I never read Yudkowsky till a few days ago; a lot of what he's said, and his arc over the past two decades, makes me not take him seriously. I wish I were as harsh on him as Hacker News or others here are.

My unpopular opinion on anything AI safety or alignment aligns with a top-level comment from a few weeks ago and the general skepticism some have voiced very well here.

Well, I am rather sure that there is a great rebuttal to these arguments somewhere on Less Wrong, so I remain unconvinced.

The religious fervor around this seems pretty irrational, with Scott getting people over at /r/slatestarcodex calling him names for it.

Well, the version of Pascal's wager offered by the AI safety people is that (1) the current AI boom might lead to AGI which is much smarter than humans, and (2) aligning such AI systems will be hard. You assign a probability to each of these, multiply by the QALY cost of killing all humans (or an even higher cost if you care about humanity's far future, which many do), and you get a number for how seriously you should take AI x-risk.

Of course, you can simply pick your AGI probability to be 1e-50, but then I might claim that you are overconfident, and ask what other past correct predictions you have made which might make me rely on your predictions instead of everyone else's.

If you pick 1% for both numbers, then a one-in-10k chance to wipe out humanity still seems like a big fucking deal.
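
To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python; the population and per-person QALY figures are illustrative assumptions of mine, not numbers anyone in this thread has committed to.

# Toy expected-value calculation for the Pascal-style argument above.
# Every number here is an illustrative assumption, not an actual estimate.
p_agi = 0.01          # chance the current boom leads to smarter-than-human AGI
p_misaligned = 0.01   # chance alignment fails, conditional on AGI
population = 8e9      # people alive today (assumed)
qaly_per_person = 40  # rough remaining quality-adjusted life-years each (assumed)

p_doom = p_agi * p_misaligned                   # 0.0001, i.e. one in 10,000
expected_loss = p_doom * population * qaly_per_person

print(f"p(doom) = {p_doom}")                          # 0.0001
print(f"expected loss = {expected_loss:,.0f} QALYs")  # ~32,000,000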

In Scott's last 24 non-OT posts on ACX, I have counted four AI stories (2x GeoGuessr, which is more "AI as a curiosity", and 2x AI 2027, which is more doom and gloom). While I am sure that some of the ACX grants go to AI safety, he is also funding plenty of other projects, which would be totally irresponsible if his p(doom) were 0.9. If this is him showing religious fervor, it is not very convincing.

I never read Yudkowsky till a few days ago; a lot of what he's said, and his arc over the past two decades, makes me not take him seriously.

I will concede that AI alignment is his pet thing much more than it is Scott's, and as of late he has been very bullish on p(doom). Still, I have found him to be a smart, engaging writer. Most of the ideas from the sequences could also be picked up elsewhere, but he did do a great job of communicating all these ideas and putting them in one place. For some light reading of his, see if you like HPMoR.

By and large, the ratsphere does not share his high confidence on p(doom), I think, because they were trained by their prophet to update based on the strength of arguments, not to blindly follow their prophet.

If it's so serious, stop writing puff pieces and start doing something more. You write a poorly written sci-fi story that you want others to take seriously if anything in it ever remotely comes true, but you don't want it criticised with the same level of rigor. That is asymmetric. I recently saw a terrible Mission: Impossible movie that made me scream internally due to its poor handling of basic computer tech and rogue AI. People should talk about doom, but at the same time they happily go to meetups with the people in AI who develop the very models they warn us about.

Of course, you can simply pick your AGI probability to be 1e-50, but then I might claim that you are overconfident, and ask what other past correct predictions you have made which might make me rely on your predictions instead of everyone else's

If I bring up Yud's record, I would come out far more correct than he has, since I always claimed that language models would be hype puff pieces if you compare them with the attention they get; they're brilliant otherwise, but they're not the same level of technology as nuclear weapons.

The meandering posts by all the AI safety folks fail to offer concrete plans either; they are not the first ones to suggest having defensive options against rogue technologies. People on this forum commented plenty about this two weeks ago.

By and large, the ratsphere does not share his high confidence on p(doom), I think, because they were trained by their prophet to update based on the strength of arguments, not to blindly follow their prophet.

This makes me a little melancholic, as Scott's writing and opinions post-doxxing make me lose respect for him. Unpersoning people, voting to keep in place the same order that wanted his life wrecked, going on podcasts and making claims about OpenAI being able to buy out all the car manufacturers in the US to make killer robots: these are childish even as hypotheticals.

The entire subreddit over at /r/slatestarcodex has rosy takes about his sci-fi piece; he got called out once this clip surfaced, and many there seem to have been traumatized by his fictional story.

In the case of rationalists, it's not even a point as major as seed oil disrespect among the "bronze age warriors,"

I think you're off on that. Both groups have people who are really into that thing, but those people are much more central to rationalism. I remember a compatriot who would drink pumpkin seed oil (it's a thing in Styria) neat as countersignalling, and he never had problems.

Also, I think going bald is actually not the end of the world. On balance I would advise not messing with your hormones over it, unless you're 20 or something.

Going bald isn't the end of the world, but it's pretty traumatic. People get hair transplants that look terrible or wear hair systems to cope with it.

If you take fin and post on Twitter, people will simply start calling you less of a man, since for them you might as well be on your way to castrating yourself chemically.

Going bald isn't the end of the world, but it's pretty traumatic. People get hair transplants that look terrible or wear hair systems to cope with it.

And some people accept their fate and choose to get jacked. Because fat and bald sucks. Fit and bald is a definite look though. All in all I'm pretty happy with my choice.

It's completely fine to go bald. A book that impacted me a lot, the Tyler Digest, has a post titled "Points of Change" where Owen Cook, aka RSD Tyler, willingly goes bald after realising that telling people "looks don't matter" is hypocritical if he himself takes finasteride and minoxidil.

Better to choose strength over weakness. I'd still say that fin is totally safe for most people, in case anyone is concerned about male pattern baldness. Just my opinion.

I have two datapoints about AI and programming recently.

  1. I asked it about an unknown PRNG function I had reverse-engineered, which I had previously tried googling to see if it was based on a standard function. It was able to find similar functions that I had not been able to find by googling. I then asked it to come up with a known-plaintext attack for when part of the seed was known, and it spat out something that looked correct.

  2. Another developer was looking at reverse engineering a function that was protected with a weak form of control-flow obfuscation. The obfuscation simply replaced call instructions with calls to a shared global dispatch function that would end up calling the target function, executing roughly 200 instructions along the way. There is an obvious attack against this obfuscation, and it can be stripped off with ~100 lines of Python in Ghidra (a rough sketch of the idea is below). They were using LLMs to try to investigate this function but didn't make much progress. Maybe with better prompting and more access to tools the LLM could have made progress, though.
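
Since the second datapoint describes the attack only in prose, here is a rough sketch of what those ~100 lines might look like as a Ghidra (Jython) script. The dispatcher address, the convention that the real target's index sits in an immediate operand of the instruction just before the call, and the target-table layout are all assumptions invented for illustration; the real script would depend on how the obfuscator actually encodes its targets.

# Hedged sketch: rewire calls to a shared dispatch stub back to their real targets.
# Addresses, pointer size, and the "index in the preceding mov" convention are
# assumptions for illustration only.
from ghidra.program.model.symbol import RefType, SourceType

DISPATCHER = toAddr(0x00401000)    # assumed address of the shared dispatch function
TARGET_TABLE = toAddr(0x00410000)  # assumed table of real call targets
PTR_SIZE = 4                       # assumed 32-bit binary

ref_mgr = currentProgram.getReferenceManager()

for ref in getReferencesTo(DISPATCHER):
    if not ref.getReferenceType().isCall():
        continue
    call_site = ref.getFromAddress()
    call_instr = getInstructionAt(call_site)
    if call_instr is None:
        continue
    prev = call_instr.getPrevious()   # assumed: "mov eax, <index>" right before the call
    if prev is None:
        continue
    index = prev.getScalar(1)         # immediate operand holding the table index
    if index is None:
        continue
    slot = TARGET_TABLE.add(index.getUnsignedValue() * PTR_SIZE)
    real_target = toAddr(getInt(slot) & 0xffffffff)
    # Override the call so analysis and the decompiler see the resolved target.
    ref_mgr.addMemoryReference(call_site, real_target,
                               RefType.CALL_OVERRIDE_UNCONDITIONAL,
                               SourceType.USER_DEFINED, 0)
    print("resolved %s -> %s" % (call_site, real_target))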

Most of these guys are alternative media and make their living at least partially from their online writing. As such, it’s not surprising that they’re adopting the opinions of their audience, at least publicly. If my audience is full of gymbros, I can’t keep them reading if I’m going against their long-standing belief that seed oils are poison. So I might choose to be silent, but it’s in my interests to let it be known that I think seed oils are bad.

Which is fine, I am happy to change my mind; I just don't like Schelling points that are hard to defend but that get you outcast if you question them. I know people IRL who could have saved their hair had they jumped on fin, but who now have the hair of a 70-year-old thanks to friends who, in good nature, refused to let them take it because of what other bros had told them online, even though my dermatologist is a guy who lifts and is a semi-pro athlete and academic prodigy.

It's also an all-consuming thing: seed oils/fin apparently cause all health problems and therefore must be crusaded against, and you are a shill if you say otherwise. Seems excessive.

have the hair of a 70-year-old

Wanting to keep all your hair is perfectly fair, as much as dyeing your hair, shaving, or any other aesthetic change to how you choose to style your hair. But I dislike this framing/phrase. Male pattern baldness is natural, and not a sign of aging or decrepitude - if anything it's a sign of virility and maturity.

How's it a sign of virility? Maturity, certainly. A physical change that's associated with aging is mostly not a sign of virility, as older people are less virile. You can be old or bald and have a lot of vigor, I'm not denying that; I'd want to age well, and I hope others do too.

Plenty do stop the decay, but male pattern baldness isn't perceived well by most people, men included, which is why millions take medicine daily to keep whatever hair they have left.

How's it a sign of virility?

In the most literal sense: it's associated with high levels of male hormones ("Men with androgenic alopecia typically have higher 5α-reductase, higher total testosterone, higher unbound/free testosterone, and higher free androgens, including DHT"). TMU it's a completely different phenomenon from hair loss in the elderly. In many young men's case, balding at the apex of the skull occurs concurrently with facial hair and body hair growth - in a very real sense it's another side of the same coin. The fact that we've come to associate it with old age and feebleness is just one of those things where cultural beauty standards have diverged from the biological reality of the human phenotype, like women having body hair, and I just think it's a bit silly in principle.

I'm not sure about the phenomenon, but I'd guess you're not correct, as the same meds are handed out and work well for all age groups; the hair loss mechanism remains the same.

A guy in his 20s has a better hormonal profile and higher virility, regardless of his genetic makeup. Most people on finasteride are in their 30s or beyond, as until your mid-20s the effects of male pattern baldness aren't as apparent.

I didn't post the clean-shaven vs. bearded soyjak meme, but it does apply. The same guy with higher levels of DHT will have lost far more hair at 48 than at 28. Is he more virile at 48?

Vikings had long braids, and so did the Indo-Aryans and Eastern European pagans; plenty went bald then too, probably at the same rates, but young men don't by default have massive fiveheads requiring them to style their hair like Stallone before his transplant.