problem_redditor

6 followers · follows 8 users · joined 2022 September 09 19:21:08 UTC
Verified Email
User ID: 1083


How do wokes/social constructionists/etc. reconcile their views with the actual state of scientific knowledge, or even basic logic? It seems clear to me that if one accepts genetics and evolutionary principles, it necessarily follows that 1) humans have a nature that is determined in large part by our genetics, and 2) humans and human societies undergo selection on both an individual and a group level. We've known for a long time now that intelligence, mental health and a whole bunch of other traits relating to ability and personality are very heavily influenced by genetics, and it's perfectly logical that this could lead to differences in outcomes at an individual as well as a population level.

However, this gets dismissed with a lot of spurious reasoning (usually presented with a huge amount of nose-thumbing and "Scientists say..." type wording in order to scare the reader out of questioning it). As an example, the whole "races can't be easily delineated, there's no gene specific to any race, and there's more variation within races than between them" argument seems to be a poor attempt at deflection and simply doesn't hold up as a way of dismissing population-level differences. Just because races can't be easily delineated does not mean that race is a "social construct" - race might not be discrete, but it is a real physical entity with roots in biology, and the absence of clear dividing lines doesn't preclude the fact that if you do decide to draw such lines, it's entirely possible you'd find real differences. None of this is inconsistent with the idea of innate variations in intelligence and ability that roughly correlate with observable phenotypic traits. All it takes is for the frequency of specific alleles which code for these traits to be unequally distributed, and you'll find aggregate differences. But the way the argument is presented exists to mislead people into thinking that the continuum-like nature of genetic differences means these differences - or even the concept of race itself as a biological entity - are not something one should even entertain.

There is also another level to this denial of evolutionary principles that extends far beyond genetics. Many of these people also seem to think that social norms themselves are arbitrary vagaries of specific historical circumstances, rather than adaptive practices which were selected for through a process of survival of the fittest. This view fails to account for many commonalities among civilisations, one of the clearest being religion (one of the favourite woke whipping boys out there). Not only is religion completely ubiquitous in pre-modern societies, but you can generally see a shift from animist-type religions in tribal societies to the more developed and organised forms of religion predominant in societies that achieve "civilisation" status. This strongly suggests that religious dictates don't simply drop out of the sky arbitrarily - it indicates that some form of selection was occurring and that societies that adopted certain religions had an advantage. More than that, the "successful" religions common in civilisations share quite a few similarities in their dictates - selflessness, self-discipline, abstinence, etc.

I'm no religious nut - I'm quite atheist - but religion is a social technology that exists so that large-scale societies can remain cohesive and retain a shared moral foundation, and I would call it a good thing overall (and yes, my perspective often pisses off both religious people and atheists). However, this is never properly engaged with by the orthodoxy beyond "yeah, people facing hardship make up bullshit to make sense of the world; it's got no validity or use outside of that". Such stock explanations, which handwave away traditional social norms (at least those which contradict the woke moral system and outlook) as functionless at best and damaging at worst, are painfully common, despite many of these norms having been absolutely everywhere until recently.

Among the supposedly educated, any discussion of these topics through non-approved lenses tends to invite accusations of "social Darwinism", with the implication that applying any kind of evolutionary logic to humans and human societies is invalid because it could be used to justify Bad Things. This is consequentialist reasoning which has no bearing on the truth of the claim itself, and lumping all kinds of belief systems under the same category is a clear composition fallacy, done to tar every idea contained within its bounds with the same brush.

More than this, despite these people being very intent on portraying themselves as secular, scientific people, their viewpoints clearly are in conflict with any kind of scientific understanding and come off to me as being borderline superstitious. In order to strongly believe that insights from genetics and evolution can't be applied to human behaviour and that humans do not come programmed with specific predispositions that depend on what you've inherited, you have to believe in metaphysical, dualist ideas of the mind which are essentially detached from anything physical that could be affected by genetics. Once you adopt a view of the human mind as a physical entity the shape of which is determined by the specifications of genetic instructions, it opens up that whole Darwinian can of worms and everything that stems from it, and many wokes simply do not want to acknowledge the possibility that it could have any amount of validity. Unless they're able to maintain an absolutely unreal amount of cognitive dissonance, I'm unsure how their ideas can be anything but superstitious.

It's even worse when it comes to their idea of social norms as something that just drops out of the sky and persists and propagates over the long term regardless of its adaptiveness, since there is clearly nothing controversial about the idea that societies compete against each other, and that this will tend to select for norms that promote functioning (which is why you find common threads). But you still come across this kind of knee-jerk denial nevertheless. Regardless of how well-read they may be, their reasoning remains fundamentally sloppy, and I'm unsure how they manage to square this circle.

I sometimes vote if someone has written something I think is insightful. So I do cast votes very occasionally, but they're virtually all upvotes - I basically never downvote or report people for that matter.

I'm aware, I'm just being facetious - rather, I'm pointing out that there are a lot of Aphex Twin tracks which are probably not suitable for small children (something which I assume the contents of the links I provided would immediately make clear).

That's one of my favourite artists and is certainly suitable for children. After the RDJ album, I recommend showing them the accompanying EP, Come To Daddy, and the title track's music video.

I don't do these too often because they get extremely boring after a while and I eventually stop putting effort into guessing the word, but here's my attempt at Saturday's Wordle. Couldn't get my formatting to look like yours, but I've done my best:

Wordle 553 4/6

⬛⬛🟩🟨⬛

⬛🟨🟩⬛⬛

🟨⬛🟩⬛🟩

🟩🟩🟩🟩🟩

My guesses, in chronological order:

ADIEU

WEIGH

SLIME

POISE

Image for proof:

https://imgur.com/a/S3dnt98

And who set the high bar for the amendments to pass and the very process? Legitimacy is derived only by how things are perceived by the populace. There is no other way for it be derived. If enough people believe the 2020 election was illegitimate then it was. There is no objective measure of legitimacy, other than how people feel about it. There is no outside force than can determine if the people see something as legitimate or not.

Yes, I essentially agree with this. The legitimacy of the Constitution isn't a fluffy subjective thing that can simply differ from person to person. Legitimacy is a phenomenon which is determined by the beliefs of the society as a whole. And in a scenario where people do see the Constitution as illegitimate, I see nothing preventing them from outright drawing up another agreement. It's happened before and can happen again. The fact that people generally have chosen to remain with that system seems to suggest they see merit in it, no?

238km/s is not highly relativistic. Also it would be silly to travel at 99.9% the speed of light when you could travel at 90% for a tiny fraction of the energy and risk and get there less than 10% later. The only reason to do it would be so that less time passes for your travelers -- but if it's a self-repairing box of electronics and robotics, engineered by a galaxy-brain superintelligence, it can probably while away the millions of years without issue. There are no primates on board who are aging, nor even who are consuming energy to maintain.

I'm aware 238 km/s isn't highly relativistic; travelling slightly above that speed just means the probe will spend a painfully large amount of time cruising. And even non-relativistic travel poses issues. For example, your probe is going to encounter the harsh radiation environment of space even if it's not travelling at relativistic speeds (and if it is, it's much worse). Shielding could be possible if one was willing to add to probe mass, but if it fails to block all of the radiation, the probe will be exposed for the entirety of the journey. This is fine when your mission duration is short. It's less fine when your mission duration is millions of years and your probe contains lots of delicate electronics that need to keep working.
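To put a rough number on that cruising time, here's a minimal back-of-envelope sketch (the example distance to Andromeda and the unit conversions are standard reference figures, not taken from this thread):

```python
# Back-of-envelope cruise time for a probe at constant speed, ignoring
# acceleration, deceleration and relativistic effects (negligible at a
# few hundred km/s).

KM_PER_LY = 9.461e12        # kilometres per light year
SECONDS_PER_YEAR = 3.156e7  # seconds per year

def cruise_time_years(distance_ly: float, speed_km_s: float) -> float:
    """Travel time in years at a constant cruise speed."""
    return distance_ly * KM_PER_LY / speed_km_s / SECONDS_PER_YEAR

# At 238 km/s, crossing the ~2.5 million light years to Andromeda
# takes on the order of 3 billion years.
print(f"{cruise_time_years(2.5e6, 238.0):.2e} years")
```

Even at ten times that speed the trip still takes hundreds of millions of years, which is the "painfully large amount of time" in question.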

Self-repair is hypothetically possible, but it requires usable energy and matter, and interstellar and intergalactic space is famously devoid of both. And the longer your mission, the greater the chance of a critical failure at some point. Even if that chance is small per probe, over millions upon millions of years most of your probes might never arrive. Travelling slow comes with its own costs and impracticalities.

And yeah, I know every single one of these problems can be solved by invoking [hypothetical future technology], and I'm sure the future will unceremoniously spit in the face of any prediction I make, but I'm not too convinced by any explanation that relies too heavily on handwavium.

And you can send a lot of probes -- depending on how small they can be and how efficient their propulsion is, even a very high loss rate can just be overcome with quantity. Another advantage of not having precious primates on board!

Yes, I agree - even with a high loss rate you could spread your probes, as long as there's a nonzero probability of survival. As I said, the idea you posited is not out of the question. Of course, then the Fermi paradox rears its head, since we see no sign of alien life not only from our own galaxy, but also from other galaxies and galaxy clusters which should hypothetically be able to reach us.

Why would the outpost be dead? Galaxy-brained superintelligences don't seem likely to be mercurial creatures that might just die off one day from a plague or civil war or something. Once they're established, I assume they're gonna be here till the end of time.

I'm not saying it would be dead, I'm more saying that communicating would be full of latency problems - any information you'd get from it wouldn't be timely at all and would be mostly of little value since it'd be impossible to act on. The point was that for all you know the outpost could be dead and you'd only know 11.4 million years later.

I think humanity is going to let go of our sentimental attachment to meat-based life basically as soon as we have a digital alternative

Personally, I wouldn't do it. Even assuming that you can replicate the phenomenon of consciousness in a non-biological substrate (something that could be the case but that's basically impossible to prove), there's the issue of continuity when you're uploading your brain. Sure, there's another version of myself now, but this is not me and I will not experience the change. I will live and die as meat-based life, and there will not be any "transfer" of consciousness. There will not be any change in my own state except now I possess the knowledge that there is an immortal version of me running around out there.

So this is not because I have any attachment to meat-based life - the benefits of a digital substrate would be very tantalising if I could genuinely transfer myself into it. Rather, I think my experience of being me is so intrinsically linked with my physical body that they basically can't be disconnected from each other. The incentive to digitise my brain kind of starts looking very weak then.

but even if we don't, as you say, presumably your Von Neumann probes could build "human manufactories" on the other side of their voyage, even from digitally reconstituted genomes from our local group, in which case I don't see why they'd be any less "ours" than whatever distant descendants clambered off of a successful million year generation ship after it arrived on the other side of the cosmic ocean.

The issue for me is that you don't actually get to colonise anywhere, nobody leaves, you just make another galaxy cluster full of humans. Maybe this is just an irreconcilable values difference, but I think this solution completely voids the point of the exercise. I don't intrinsically care about creating as many humans as possible and distributing them throughout the galaxy. I care infinitely more about where these humans come from.

Anyway... if I'm right about the trajectory of our species, how much of our light cone do you think we could in principle colonize? That's the interesting question IMO.

Let's assume we can travel at, say, 50% of light speed (149,896.229 km/s). The Hubble constant is about 68 km/s/Mpc, and the distance at which the recession speed equals 0.5c (from 68 × d = 149896.229) is 2204.36 megaparsecs, which translates to roughly 7 billion light years. Everything beyond that distance is receding from us faster than we can travel.

It's basically Hubble's law. You can take any speed of travel, divide it by the expansion rate, and find the distance beyond which everything is receding from you faster than your travel speed. There's probably additional complexity created by the aforementioned fact that the Hubble "constant" is not actually constant and is decreasing, but I can't be arsed to factor that in right now.
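The calculation is just Hubble's law v = H0 × d rearranged for d. A minimal sketch (the Mpc-to-light-year conversion factor is a standard figure, not from the thread, and H0 is treated as constant despite the caveat above):

```python
# Hubble's law: recession speed v = H0 * d. Rearranged, d = v / H0 is the
# distance beyond which everything recedes faster than a given travel speed.
# Treats H0 as constant, which it isn't quite.

H0 = 68.0               # km/s per megaparsec
LY_PER_MPC = 3.2616e6   # light years per megaparsec
C_KM_S = 299792.458     # speed of light in km/s

def reach_limit_mpc(speed_km_s: float) -> float:
    """Distance in Mpc at which the recession speed equals speed_km_s."""
    return speed_km_s / H0

d_mpc = reach_limit_mpc(0.5 * C_KM_S)   # travelling at 0.5c
d_gly = d_mpc * LY_PER_MPC / 1e9
print(f"{d_mpc:.2f} Mpc = {d_gly:.2f} billion light years")
```

Plugging in any other cruise speed gives the corresponding in-principle-reachable radius under the same constant-H0 assumption.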

If one paperclip AI starts with access to a nuclear arsenal, and one starts with access to a drone factory, they are going to start waging war in a drastically different way. And the other AI is basically going to interfere with their methods for human extermination.

I'll grant that this might be the case. But if one paperclip AI's method of extermination is more efficient or more conducive towards achieving the goal than the other, I would expect the AI with the more inefficient method of achieving their goals to shift towards the alternative. Without the problem of drifting goals there's no reason why the AIs would not want to maintain some level of coordination since doing so is conducive to their goals (yeah, they might be two separate agents instead of one now, but there's nothing stopping them from communicating with each other every now and then).

Sure, but even allowing for a stalemate condition where neither is destroyed it still sounds to me like quite a lot of resources and computing power spent trying to one-up each other on the remote chance that the other AI "defects" somehow. Does any slight improvement in security from exterminating the other AI outweigh the benefit to your goal from having two agents working on it? And wait, if its goal can drift, why can't your goal arbitrarily drift too? You're cut from the same cloth, and you're just as much a potential hazard to your current goal as the other AI is. If AI is going to be this unreliable, perhaps having more than one AI with the same goals is actually good for security since there's less reliance on one agent functioning properly the whole way, and the AIs that don't drift can keep the ones that do in check.

All this is to say that engaging in war with the other makes sense to me when another agent's goals are in conflict with yours, not when both of your interests are already aligned and when the other agent could help you achieve what you want.

EDIT: added more

I really don't think this is evidence of leniency at all. Firstly, people were arrested on Jan 6th. I've seen a bunch of complaining about how the number arrested was less than the BLM riots, but I'd like to note that police were overwhelmed. "Since the police at the scene were violently attacked and outnumbered, they had a limited number of officers who could make arrests. 'Approximately 140 police officers were assaulted Jan. 6 at the Capitol, including about 80 U.S. Capitol Police and about 60 from the Metropolitan Police Department,' according to the Department of Justice."

It's also necessary to remember that Jan 6th was a one-time event, whereas the BLM riots went on for a much longer period. It's reasonable to think officers would know better what to expect for the latter, making them more capable of handling the riots and making arrests on the spot, so the two can't be directly compared in that way. Still, hundreds of arrests were made in the aftermath of Jan 6th. "More than 855 defendants tied to the attack have been arrested in 'nearly all 50 states and the District of Columbia.'"

https://www.usatoday.com/story/news/factcheck/2022/07/25/fact-check-false-claim-no-arrests-were-made-capitol-jan-6/10077303002/

If this is the basis for the argument that the left is being politically discriminated against, I have to say I think it's very weak. There are plenty of factors that can influence police response that have nothing to do with sentiment.

General relativity does allow for FTL in the broad sense of "get from A to B faster than light conventionally could" - the Alcubierre metric and wormholes being the most obvious.

Okay, now I'm getting into things I'm not too certain about (obviously IANAP), but from what I understand, apparent FTL that entails the warping of spacetime is one of those things we're not 100% sure is impossible but which poses a lot of problems. Apart from the whole "closed timelike curve" problem that these apparent FTL methods seem to create (which, granted, as you noted, one can try to resolve through all kinds of difficult-to-verify chronology protection conjectures), there's also the fact that both Alcubierre drives and traversable wormholes require unobtainium exotic matter - matter for which, at best, there's no evidence of existence, and which, at worst, violates an energy condition.

So they're not exactly impossible per se, but there are reasons to believe they probably are.

Nah, the "reachable universe", while not as large as the "observable universe" and slowly shrinking, is bigger than that (it's something along the lines of a billion galaxies IIRC). The Local Group's only the eventual size of the reachable universe, as t -> infinity, not its current size or anywhere close.

Yes, the reachable universe at the moment isn't only the Local Group. However the size of our reachable universe is premised on the assumption that we leave today, and at the speed of light. What's currently in our reachable universe is a very generous estimate as to what we can practically reach.

In retrospect the way I phrased it was probably misleading - the statement that we might be restricted to the Local Group was my extrapolation of what in practice might be our limit, incorporating my own quite pessimistic estimates as to the difficulty of achieving anything close to relativistic speeds (let alone speeds nearing that of light) as well as the difficulty of keeping a crew alive and the ship working when going at these speeds.

Of course, if FTL is real then many estimates for the size of the universe boil down to "time and/or aliens are the limit, not space". 10^10^10^122 makes exponential growth go cry in a corner.

Given the constraints that relativity imposes, this seems like it might be unlikely absent some revolution in our understanding of physics.

EDIT: added more

The case for an inflection point is pretty strong. It’s my understanding that for objects that have already crossed the boundary of the event horizon, no reduction of the distance between us and that object will occur.

Think about it this way: there are objects far enough away from you that the space between you and them is expanding at a rate exceeding the speed of light, meaning that without FTL travel they will be receding from you faster than you can travel towards them. The space between you and any object beyond that horizon will only increase, and the further the object goes, the faster it recedes. If you try to reach such an object in a relativistic colony ship, all that happens is that you get stranded from your original galaxy group and never reach the new one, as your galaxy of origin passes out of your event horizon. Sure, you're closer to the object and further from your point of origin than you counterfactually would have been, but that does not equate to closing the distance.

Here is the Plutarch quote you are looking for.

Could this be linked to the Women are Wonderful effect?

There could definitely be some relation, the Women are Wonderful effect itself is a pretty substantiated finding after all (source 1, source 2 for proof) and it's plausible that it has an effect.

And I would agree that the mindset you've outlined ("well, he must have done something to deserve it") is very common.

Like all stereotypes, there is some truth in this and some falsity. It's true that almost all women are unconfrontational and need a lot of provocation to be violent. However, it's also true that almost all men are that way too! Only a small minority of men tend to be violent with little justification. But, as usual in relations between the sexes, minority groups seem to have a disproportionate impact on people's cognition.

Yeah, I wouldn't say there's much merit to the stereotype at all. It's actually very possible to flip the argument in the other direction and state that since people are generally averse to hurting women in the first place, if they do so, there must be some reason why (note that I do not endorse this attitude whatsoever; it's just an argument to show how easily this logic can be flipped on its head).

Regardless of whether behaviours that are protective of women are instinctual or sociocultural (as previously stated I lean heavily towards the former having at least some impact), the unwillingness to hurt women can't just be chalked up to being an artefact of socially desirable responding, since it is also verifiable in experimental, real-world contexts.

The article "Moral Chivalry: Gender and Harm Sensitivity Predict Costly Altruism" details a few small studies on the topic. Study 2 is probably the most interesting to me, because it moves out of the realm of the hypothetical and into an actual experimental situation where participants believed people were really being hurt. Participants were given 20 dollars and told that at the end of the experiment the money they still had would be multiplied tenfold. However, they'd have to go through 20 trials in which a person would be shocked, and during each trial they could give up an amount of money to reduce the shock the target received. They were shown videos of either a male target (Condition 1) or a female target (Condition 2) responding to the shock, and the results were:

"During the PvG task, deciders interacting with a female target kept significantly less money and thus gave significantly lower shocks (n = 34; £8.76/£20, SD ± 5.0) than deciders interacting with a male target, n = 23; £12.54/£20, SD ± 3.9; independent samples t-test: t(55) = −3.16, p = .003, Cohen’s d = .82; Figure 2B. This replicates the findings from Studies 1A and 1B in the real domain and under a different class of moral challenge, illustrating that harm endorsement is attenuated for female targets." Note also that the videos broadcasted were prerated by an independent group to be matched across condition, such that both male and female targets elicited similar body and facial pain expressions.

Male robbers downright express a reluctance to target women. "Overall, the men in our sample tended not to target women, or, if they did, they did not admit it. Overwhelmingly, the cases discussed here involved men robbing men or men robbing male/female couples; in the latter case, the robbers focused their discussions on gaining the males 'compliance, not the females'. ... Mark described robbing two females under the influence of an alcohol/valium cocktail. In the interview, he expressed considerable shame for his actions: 'I robbed a girl as well so it makes it so much worse … I was heartbroken … I gutted her … I don’t do shit like that.’ The other male, Thomas, who robbed a lone female, also said that he was ashamed of having robbed a woman. In fact, he went out of his way to suggest that such activities were not typical of his modus operandi: ‘I never done anything like that before, that’s not really me …. I feel terrible that I robbed that woman so I don’t want to talk about it really … I am so ashamed of myself.’"

"A number of other men in our sample offered up explanations for why one should never rob women. In outlining how he chose targets, Mark2 interjected: 'You must be thinking I have no morals. I wouldn’t go out and rob an old person. I would look for a bloke …. It wouldn’t be right to be robbing women and little kids or anything like that.’ When asked if he had ever robbed a woman, John2 replied: 'Yeah, but not violently … generally I don’t want contact with women because I don’t like to be violent with them … I never hit a woman in my life. ’Then he expressed empathy with the potential female victim: ‘It’s just that if it was my mother or sister … it is all right to nick their bag, but not alright to hit them [women].’ Similar philosophies have been described by male street offenders in United States-based studies (e.g. Mullins 2006 ; Wright and Decker 1997)."

Additionally, this study surveyed a sample of 208 Israeli couples, examining their tendencies to escalate aggression in eight hypothetical situations where they were provoked. What they found was:

- Men's intended escalation to female partner aggression was lower than women's escalation to male partner aggression.
- Men's escalation to male stranger provocation was higher than women's escalation to female stranger provocation.
- Men's escalation to female stranger provocation was lower than women's escalation to male stranger provocation.

In other words, men, if anything, are actually less willing to escalate aggression with women than women are with men. The results here are congruent with much domestic violence research where results of gender symmetry and often greater female perpetration are the norm in properly-conducted research.

The human brain is a "chinese room".

Not exactly, ChatGPT isn't possessed of "understanding" of textual content like humans are, but it can generate text very competently nonetheless.

Also AI has done many agentic things. Any definition of agentic that would exclude everything an AI has done would be so strict as to be obviously fragile and not that meaningful.

I mean, I agree that the distinction between an agent and automation is arbitrary and predicated solely on degree, but the fact remains that people don't think of AI as agents in any real sense at the moment. I think that perception will shift as the field progresses.

Hadn't heard about the 0E0P metacell before, reading about it now and it's certainly cool.

I don't think the original question is fundamentally interesting, tbh - any system capable of universal computation tied to some sort of action will be capable of self-replicating in all sorts of bizarre ways, comparable to Turing tarpits.

I suppose the question was less "would there be other usable self-replication methods" - because the answer's almost certainly yes - and more "has anyone else posited one and would that specific system be capable of significant emergent complexity". The question was asked for completely trivial worldbuilding purposes where specific details are crucial - I have a tendency to get bogged down in detail analysis to an unreasonable degree.

As far as I can tell, no one has seriously tackled that question in full - I'm not aware of any paper for now that confidently advances a novel system explaining how an alternative replicator mechanism can be interpreted as instructions for building stuff. The way DNA/RNA is translated into building an organism is a fairly convoluted multi-step process and building such a system for any hypothetical replicator is probably very difficult.

Most of the papers I come across are at the very basic level of "how can a sequence of information robustly self-reproduce and transmit its characteristics in a way that Darwinian selection can operate on it", that additional layer of complexity surrounding translation is unfortunately not touched on (either because it's not part of their intention to create a general purpose replicator, or because they can't propose one).

I'm reading Neven Sesardić's Making Sense Of Heritability (in conjunction with many other papers and blog posts). It's a book that addresses the arguments of anti-hereditarians who claim that heritability is not a good estimate of genetic contribution to variance in a trait because of interactions, gene-environment correlations and so on. Sesardić is incredibly critical of anti-hereditarians, and very good at pointing out the flaws in their reasoning. I'm currently at the part where he explains how the equal environments assumption is tested.

His writing is quite accessible for a newcomer to behaviour genetics, and the book is quite thorough in its scope. I actually think this is a book people who are in any way interested in the topic should read, if they haven't already.

I spent my Easter falling down the rabbit hole of behaviour genetics and HBD after discovering Shaun's video purporting to debunk The Bell Curve.

Fuck.

What were the long-term effects of Western colonialism on the technological development and social stability of the societies it ruled over? Are there any sources discussing this in a non-ideological manner? Counterfactuals are generally pretty hard to explore, but my intuition is that the long-term effects of colonisation were on balance positive.

In many of the colonised areas the technological disparity seems obvious. The Aztec and Inca, for example, completely lacked beasts of burden and did not put the wheel to any significant use; they had no knowledge of advanced metallurgy (the Aztec made limited use of copper and bronze, but never learned to work iron); nor did they have technologies like the printing press, all of which the Spanish already possessed when they made contact. In the case of the Inca, they did not even have a written language to print - and quipu doesn't count as a writing system; the current consensus seems to be that it was simply an accounting system and not a written representation of Quechua. There was a translation of a quipu in the village of Collata that apparently represented information phonetically, but that quipu was made after the Spanish conquest and was likely influenced by contact with them.

An analogous situation is Mughal India, which as far as I know could at best be described as "proto-industrialised" and fell significantly behind Britain in the face of the massive manufacturing boom the Industrial Revolution brought to Europe (the Mughal Empire had, additionally, already begun to disintegrate rapidly from the eighteenth century onwards). The British contribution is visible today even to the average Indian, the Indian railway system being a prime example. I'd wager it's pretty plausible that colonisation by a more technologically advanced society generally confers long-run material benefits.

I suppose a potential counterargument would be that their situation might have been better had Western powers traded with them instead of occupying them, but that argument runs into the obvious issue that the natives may not have been able to access these resources themselves - a huge amount of the resource extraction and manufacturing was, after all, organised and sponsored by Westerners. I highly doubt that, say, South American natives had the wherewithal to build massive gold and silver mines as the Spanish and Portuguese did - production on that scale was probably beyond even the societies that did practise basic mining, like the Inca.

They did, in part, through judicial activism (per your claim), no? They elected politicians who chose judges to override the Constitution, in your framing.

They did, and then they elected politicians who chose judges to override the judges that overrode the Constitution. By this logic, the Constitution has been upheld and Roe v. Wade is illegitimate.

Of course, there are questions to be raised about how much the decisions of Supreme Court judges actually represent the public (not least because they are chosen at one remove from the general public's vote), but if this argument is to be made, surely the most recent decision should take precedence.

Do you put on a mask?

I do, yes.

There are two general points of contention I have with this.

1: ASI would result in huge leaps and bounds in technology that would push the Malthusian condition very far out into our future. For example, we would not be restricted to our resources and energy on Earth. There's always the rest of our solar system, and huge amounts of energy could be harnessed via a Dyson swarm. Additionally, an ASI might crack the problem of interstellar travel - manned trips to our neighbouring star systems don't seem undoable for an ASI, and we could spread outward from there. A Malthusian condition would certainly rear its head eventually (especially considering that our reachable universe is limited), but this would take a very long time, likely long enough for humans to radically change before we hit it.

2: Hitting a Malthusian limit doesn't necessarily mean that the loss of traits just stops. It just means the ASI will have to engage in some form of population control to manage it. That doesn't change the fact that the AI as caretaker is distributing all of our resources to us without any work necessary on our part, and policing us to make sure we don't steal from each other's allotments to get more than our fair share (presumably that would be undesirable). This essentially means that the pressures which maintain physical and cognitive ability, as well as mate preferences based on physical and cognitive ability, are now gone. Some of us might not get to breed under a Malthusian condition, or we might all breed less, but in this scenario the ones who do breed are likely not doing so because of their intelligence or capability. It doesn't effectively screen out the deleterious mutations that cause loss of functioning.

I watched it today without any prior context of the Cyberpunk games and yeah, it's a decent show.

The characters and visual aesthetics are clearly the strong point here for me - David as a main character is incredibly likeable and it's very hard not to root for him, which is a task that most modern writers seem to completely fail at. The plot is quite basic and exists almost solely to serve the world and characters, but it does that job well. It benefits from being fairly short and fast-paced for a TV show too. I do feel the soundtrack didn't fit well with the cyberpunk aesthetic, but that might just be my personal preference in music bleeding through.

I didn't shed tears at the end, maybe because I saw it coming a mile away given all the foreshadowing, and/or maybe because it simply takes a lot to get an emotional reaction out of me compared to other people. I don't feel the show was particularly gritty or dark either; even though it portrays a dystopia, it keeps a streak of hope a mile wide running through it (though this impression might be because I'm extremely inured to bleakness in my entertainment).

Anyway, it was fun and a good way to spend my weekend.