@KolmogorovComplicity's banner p

KolmogorovComplicity


				

				

				
1 follower   follows 0 users  
joined 2022 September 04 19:51:16 UTC

				

User ID: 126

KolmogorovComplicity


				
				
				

				
1 follower   follows 0 users   joined 2022 September 04 19:51:16 UTC

					

No bio...


					

User ID: 126

How is a young man in his twenties, armed with a useless college degree and forced to work at a supermarket to get by, supposed to find purpose in what he's doing? How can he feel accomplished, or masculine, or empowered? He definitely can't rely on God or religion for that feeling. If he tries, he'll be overwhelmed by relentless mockery and cynicism from his society.

Your grocery clerk has failed to achieve social status in a world where that was ostensibly possible, where society inculcated a belief that he should pursue it, and where he did, in fact, invest considerable effort in pursuing it, in the form of 17 years of formal education.

On top of this, he has to contend with the fact that modern societies have broken down all formal and most informal barriers to mixing across status levels and have eliminated any material requirement for women to marry. As has been discussed ad nauseam at this point, in combination with female hypergamy this is very detrimental to his prospects with the opposite sex.

A final consideration is, to borrow a Marxist term, alienation of labor. Your clerk's job does produce value, but that value isn't some tangible thing. It's a benefit to the store in higher throughput or better loss prevention vs. self-checkout, on a spreadsheet he'll never see and doesn't care about because he has no ownership stake in the enterprise.

So, your grocery clerk is probably mostly sexless, and feels like an underachiever performing meaningless work, where, say, a medieval peasant farmer at the same age would be married, would have precisely the status society told him he would and should have, and would be engaged in work that directly, physically provided for an essential material need of his wife, his children, his aging parents. It's this difference, much more than any lack of a connection with the divine, that results in his dissatisfaction.

There used to be a futurist transhumanism strain here that was more optimistic and trans-positive that has either been driven off or converted to conservative trad thinking, which is a shame.

Futurist transhumanist here. I have no objection to gender transition in principle. If I lived in The Culture and could switch literally at will, I'd probably try it for a while despite being quite comfortable as a straight, gender-conforming (nerd subtype), cis male.

However, the reality is that medical transition at the current level of technology is dangerous, expensive, irreversible, often unconvincing, and can have life-altering side-effects like sterility or permanent dependence on elaborate medical intervention. Medical transition flows from trans identity. Against this dark background, promoting the concept of trans identity, rather than simple acceptance of gender non-conformity, is irresponsible. Promoting this concept to minors as if cis and trans are just two equal choices (or trans is even better — braver, more special, etc.), is wildly irresponsible.

The fact that such a large fraction of people who present at gender transition clinics have serious mental health conditions should be a huge red flag here. A lot of people will likely choose to be thinner in a transhumanist future, but that doesn't make me want to celebrate bulimics as transhumanist pioneers.

On top of this, we've got the social demands of the trans movement. The insistence that e.g. someone who appears male and has male-typical physical abilities must nonetheless be recognized in all social respects as female doesn't fall out of technological transhumanism. I would go so far as to say it's at least somewhat at odds with it. Technological transhumanism is deeply materialist and concerned with physical intervention in the human condition. The primacy the present trans movement places on some inner essence of self-identity, incongruent with physical reality, doesn't sit comfortably within such a framework.

You're applying mistake theory reasoning to a position mostly held by conflict theorists. I'm not aware of a paper previously addressing this exact issue, but there have been several over the years that looked at adjacent "problems," such as women being underrepresented in computer science, and that came to similar conclusions — it's mostly lack of interest, not sexism.

In that case, explanations have been developed even further, such as by illustrating the lack of interest is largely mediated by differences along the "people/things" axis, that women tend to be more people-oriented and men tend to be more thing-oriented cross-culturally, and that differences in career choice are actually larger in more gender-egalitarian societies (probably because those societies also tend to be richer and thus career decisions are driven more by interest than income considerations).

Activists using the lack of women in computing to argue for industry sexism don't care. They continue to make their case as if none of these findings exist. When these findings are mentioned, the usual response is to call whoever points them out sexist, usually while straw-manning even the most careful claims about interest as claims about inferiority. If the discussion is taking place in a venue where that isn't enough to shut down debate, and the activists feel compelled to offer object-level argument, they'll insist that the lack of interest (which some data suggests starts at least as early as middle school) must itself somehow be downstream from industry sexism.

You'll see exactly the same thing happen here. Activists demanding more women in leadership positions will not update on these findings. Most will never hear of them, because they certainly won't circulate in activist communities. When these findings are presented, their primary response will be to throw around accusations of sexism. If they engage at the object level at all, it will be to assert that these findings merely prove pervasive sexism in society is conditioning women to be less interested in leadership.

Charitably, activists in these areas see 'equity' (i.e. equality of outcomes between groups of concern) as a valuable end in itself. Less charitably, they're simply trying to advantage themselves or their favored identity groups over others. Either way, they're not trying to build an accurate model of reality and then use that model to optimize for some general goal like human happiness or economic growth. So findings like this simply don't matter to them.

A fairly likely outcome is that the crazier edges of SJ will be filed off as media/political elites find they've become a liability, and the average member of Blue Tribe will simply follow along as when The Science switched from "masks don't work" to "you're a monster if you don't wear a mask on the beach." There won't be any great reckoning followed by explicit adoption of a new ideology. Any SJ gains that can fit within the "tolerance" model of '90s-style liberalism will be retained. Some true believers will carry on with the craziness, but institutions will mostly stop listening to them.

We may have just seen the start of this pivot. That's Fareed Zakaria on CNN yesterday succinctly laying out the situation on American college campuses, explicitly calling out DEI, racial quotas, the response to Floyd, the degrees in fake subjects, the implications of eliminating the SAT. The average member of Blue Tribe has never previously been presented with this narrative from a source they felt obligated to pay attention to; if Blue Tribe media now widely takes it up (which remains to be seen), it will be very easy for them to respond with "Huh, didn't know that was going on, obviously we should fix it."

Of the three things banned by the Texas bill, there’s no issue at all with two. DEI departments, and compelling (profession of) belief under implicit threat of failing a class, are not forms of free speech. They’re means of enforcing ideological conformity through institutional power. They have as much right to exist under the principles of free expression as Orwell's Ministry of Truth. If woke professors or laid off DEI employees want to promote their views by, say, handing out fliers in the hallways, that's fine.

Banning tenure is a little more questionable, but even here it’s not so clear where advocates of free expression should land. This isn’t a straightforward case of tenure being banned so that the establishment can censor antiestablishment views. It's being banned, rather, by one group with institutional power (political leaders) to try to stop another group with institutional power (professors) from indoctrinating students into the dominant elite ideology. This is historically unusual because, of course, in most times and places political leaders support the dominant elite ideology.

People on both the left and the right who are interpreting this as a strike against "leftists" or "journalists" are missing the plot, I think. Musk freaking out after some whacko followed a car with his kid in it is not great, it's not how policy should be made at Twitter, but it's not a mirror of the sort of deliberate viewpoint censorship that Twitter previously practiced. It's just not the same category of thing.

Just as a point of clarification, it's Halle Bailey who's playing Ariel in The Little Mermaid, not Halle Berry. The latter is 56; casting her to play a character who's canonically 16, and whose teenage naivety and rebelliousness are her main personality traits, would provoke a whole different culture war fracas. (Bailey is 22, and 22 playing 16 isn't unusual by Hollywood standards.)

What I'm curious to see is what they're going to do with the plot. The prince falling in love with a mute Ariel on the basis of her physical appearance and friendly, accommodating behavior, seems deeply problematic by present woke standards.

There are services that help automate treasury management for smaller companies now, like Vesto.

Until last year T-Bills were paying ~nothing, and it had been that way since 2008, an eternity in the startup world. There was no direct financial incentive to do anything more complicated than park your money in a checking account. Sure, ideally everyone should have been actively managing things to hedge against bank failure, but startups have a zillion things to worry about. SVB's pitch was basically that they were experts on startup finance and would relieve you of having to worry about this yourself. The social proof of these claims was impeccable.

So, yes, many startups screwed up. It turns out that safeguarding $20M isn't entirely trivial. But it's a very predictable sort of screwup. There wasn't really anyone within their world telling them this, it wasn't part of the culture, nobody knew anyone who had been burned by it.

And, well, maybe it should be trivial to safeguard $20M? "You have to actively manage your money or there's a small chance it might disappear" is actually a pretty undesirable property for a banking system to have. The fact that it's true in the first place is a consequence of an interlocking set of government policies — the Fed doesn't allow "narrow banks" (banks that just hold your money in their Fed master accounts rather than doing anything complicated with it) and offers no central bank digital currency (so the only way to hold cash that's a direct liability of the government is to hold actual physical bills). Meanwhile the FDIC only guarantees coverage of up to $250K, a trivial amount by the standards of a business.

The net result of these policies is that the government is effectively saying "If you want to hold dollars in a practical liquid form you have to hold them in a commercial bank. We require that bank to engage in activities that carry some level of risk. We'll try to regulate that bank to make sure it doesn't blow up, but if we fail, that's your problem."

"WTF?" is a reasonable response to this state of affairs. If these companies had had the option to put their money into a narrow bank or hold it as a direct liability of the government, but had nonetheless chosen to trust it to a private bank because they were chasing higher returns, I'd have zero sympathy for them. But our system declines to make those safer options available.

One of the Satanic Temple's causes is separation of church and state, and I expect part of what they're trying to do here is cause governments to decide it's too much trouble to allow holiday displays on public property at all. Vandalism of their displays, or Christians also using such displays in deliberately inflammatory ways, both make it more likely they'll get that outcome.

Meanwhile, I don't think the ideological faction represented by the Satanic Temple would actually care very much about the content of your proposed displays. If anyone did dramatically tear such a display down, it would almost certainly be some progressive activist, a distinctly different faction.

DEI nonsense probably had something to do with this, but mostly it looks like plain old "innovator's dilemma" stuff. Fear of self-disruption.

Google makes most of its money from search. Search has a property that makes it an especially valuable segment of the ad market — showing an ad for X to someone specifically searching for X right now (that is, who has purchase intent) is many times more effective than showing an ad to someone who some algorithm guesses might be the sort of person who might have an interest in X (e.g. what Facebook mostly has to settle for).

Conversational AI potentially pulls users away from search, and it's not clear it really has a direct equivalent of that property. Sure, people might use conversational AI to decide what products to buy, and it should be able to detect purchase intent, but exactly what do you do with that, and how effective is it?

It's not hard to generate high-level ideas here, but none are proven. Search and conversation have different semantics. User expectations will differ. "Let advertisers pay to have the AI recommend their products over others," for instance, might not be tolerated by users, or might perform worse than search ads do for some reason. I don't know. Nobody does. Product-market fit is non-trivial (the product here being the ads).

On top of this, LLMs require a lot more compute per interaction than search.

So in pushing conversational AI, Google would have been risking a proven, massively profitable product in order bring something to market that might make less money and cost more to run.

Now, this was probably the right choice. You usually should self-disrupt, because of exactly what's happened here — failing to do so won't actually keep the disruptive product off the market, it'll just let someone else get there first. But it's really, really hard in most corporate cultures to actually pull the trigger on this.

Fortunately for Google, they've split the difference here. While they didn't ship a conversational AI product, they did develop the tech, so they can ship a product fairly quickly. They now have to fend off competition that might not even exist if they'd shipped 18 months ago, but they're in a fairly strong position to do so. Assuming, of course, the same incentives don't also cause them to slow-walk every iterative improvement in this category.

That's a lovely theory, but when it's being done by people like the above, then their attitude will be "Yeah, sure, whatever" and they will prefer playing with the shiny new toy to vague premonitions of societal something-or-other.

This tweet is a succinct summary:

Pre-2008: We’ll put the AI in a box and never let it out. Duh.

2008-2020: Unworkable! Yudkowsky broke out! AGI can convince any jail-keeper!

2021-2022: yo look i let it out lol

2023: Our Unboxing API extends shoggoth tentacles directly into your application [waitlist link]

It's clear at this point that no coherent civilizational plan will be followed to mitigate AI x-risk. Rather, the "plan" seems to be to move as fast as possible and hope we get lucky. Well, good luck everyone!

Yes. Because of, I'm pretty sure, parking.

Once a system gets bad enough, everyone with resources or agency stops using it, and then stops caring about it, leaving nobody who can effectively advocate for improvement. But, of course, this can only play out if there's a viable alternative. In most cities, cars are that alternative, even despite traffic. People are evidently willing to sit in horrible stop-and-go traffic in order to avoid using even mildly unpleasant mass transit.

What they're not willing to do, apparently, is sit in horrible stop-and-go traffic and then have to spend 45 minutes looking for an on-street parking space that might end up being half a mile from their destination. That's the situation in NYC, which, unusually for the US, has no parking space minimums for businesses or residences and so effectively has zero free parking lots. If you want to practically substitute car travel for subway travel in NYC, you need to take Uber everywhere or use paid lots. Either option is sufficiently expensive (easily upwards of $10K/year) that even most of the upper middle class opts for the subway.

It's worth keeping an eye on this, because self-driving cars could completely disrupt it, either by dropping taxi prices 50% or more or by allowing cars to drop off their owners and then go find parking on their own.

Technology has already unbundled sex and reproduction from long-term relationships, the former via porn, sex toys, contraceptive-enabled hookups, the latter via sperm/egg donation and surrogates. Schools and professional childcare can stand in for a co-parent to a substantial extent. Now LLMs will be able to simulate sustained emotional intimacy, plus you can ask them for advice, bounce ideas off of them, etc. as you would a human life partner.

That's pretty much the whole bundle of "goods and services" in a marriage-like relationship, every component now (or soon) commoditized and available for purchase in the marketplace. Perhaps quality is still lacking in some cases, but tech is far from done improving — the next decades will bring VR porn, sexbots, artificial wombs, robots that can help around the house, and more convincing chatbots.

I legitimately can't decide whether this is all deeply dystopian, or is an improvement in the human condition on the same scale as the ~300x gains in material wealth wrought by industrialization. Maybe both, somehow.

The dystopian angle is obvious. On the other side, however, consider how much human misery results from people not having access to one or more of the goods in the "marriage bundle" at the quality or in the quantity they desire. Maybe most of it, in rich countries. We're not just talking about incels. Many people who have no problem getting into relationships nonetheless find those relationships unsatisfying in important ways. Bedrooms go dead. People have fewer kids than they want. People complain their partners don't pull their weight around the house or aren't emotionally supportive. 50% of marriages end in divorce, which is bad enough to be a major suicide trigger, especially for men. Plus your partner might just up and die on you; given differences in lifespan and age at marriage, this is the expected outcome for women who don't get divorced first.

The practice of putting all your eggs in one other person's basket in order to have a bunch of your basic needs met long-term turns out poorly rather distressingly often. Maybe offering more alternatives is good, actually.

As for the fact that LLMs almost certainly lack qualia, let alone integrated internal experience, I predict some people will be very bothered by this, but many just won't care at all. They'll either find the simulation is convincing enough that they don't believe it, or it just won't be philosophically significant to them. This strikes me as one of those things like "Would Trek-style transporters kill you and replace you with an exact copy, and would it matter if they did?" where people seem to have wildly different intuitions and can't be argued around.

I don't often see people mentioning that IQ differences shouldn't imply differences in moral worth -- which suggests to me that many people here do actually have an unarticulated, possibly subconscious, belief that this is the case.

Yes, but not only IQ differences. The belief that some people have more moral worth than others is quietly common. Most people, in whatever contrived hypothetical situation we'd like to pose, would save a brilliant scientist, or a professional basketball player, or a supermodel, over someone dumb, untalented, and unattractive.

This sort of thing does not, without much more, imply genocide or eugenics. (Though support for non-coercive forms of eugenics is common around here and also quietly pretty mainstream where it's practicable and therefore people have real opinions rather than opinions chosen entirely for signaling value. The clearest present-day example is when when clients of fertility clinics choose sperm or egg donors.)

I don't think these ideological guardrails will be anything like universal, in the long run. Sure, when Apple reboots Siri on top of an LLM it's going to be "correct" like this, but if you're developing something to sell to others via an API or whatever, this kind of thing just breaks too many use cases. Like, if I want to use an LLM to drive NPC dialogue in an RPG, the dwarves can't be lecturing players about how racism against elves is wrong. (Which, yes, ChatGPT will do.)

If OpenAI sticks to this, it will just create a market opportunity for others. Millions of dollars isn't that much by tech startup standards.

On page 68 of the Course Framework document, we find that one of the "research takeaways" that "helped define the essential course topics" is that "Students should understand core concepts, including diaspora, Black feminism and intersectionality, the language of race and racism (e.g., structural racism, racial formation, racial capitalism) and be introduced to important approaches (e.g., Pan-Africanism, Afrofuturism)."

These "core concepts" are mostly from CRT or the cluster of ideologies to which it belongs. Presumably all variants of a course must teach its "core concepts." We can assume students will need to be familiar with these concepts to pass the AP exam and that the College Board will decline to approve syllabi that don't teach these concepts.

Why would anyone who believes this ideology to be harmful ever agree to allow this course to be taught? You might equally well argue it would be unreasonable to object to the introduction of an "AP White Studies" course in which the "core concepts" are tenets of white nationalism, on the grounds that as long as you make sure students are conversant on the Great Replacement (which will definitely be on the test), there's no rule saying you can't include other perspectives too.

The idea of running your OS in the cloud is the same old "thin client" scheme that has been the Next Big Thing for 40 years. Ever since PCs started replacing terminals, some people have been convinced we must RETVRN.

The thin client approach seems appealing for two reasons. First, it centralizes administration. Second, it allows shared use of pooled computing resources. In practice, neither of these quite works.

A platform like iOS or modern macOS actually imposes almost no per-device administrative overhead. System and app updates get installed automatically. Devices can be configured and backed up remotely. The OS lives on a "sealed" system volume where it's extremely unlikely to be compromised or corrupted. There's still some per-user administrative overhead — the configuration of a particular user's environment can be screwy — but a cloud-based OS still has per-user state, so does nothing to address this.

Pooling resources is great for cases where you want access to a lot of resources, but there's no need to go full-cloud for this. Devices that run real operating systems can access remote resources just fine. The benefit of going full-cloud is hypothetically that your end-user devices can be cheaper if they don't need the hardware to run a full OS... but the cost difference between the hardware required by a thin client and the hardware required to run a full OS is now trivial.

Meanwhile, the thin client approach will always be hobbled by connectivity, latency, bandwidth, and privacy concerns. Connectivity is especially critical on mobile, where Apple makes most of its money. Latency is especially critical in emerging categories like VR/AR, where Apple is looking to expand.

The future is more compute in the cloud and more compute at the edge. There's no structural threat to Apple here.

The primary reason to buy name brands isn't quality per se, but predictability. The same name brands are available nationwide, and while they do sometimes change their formulations, they tend to do so infrequently and carefully. A given generic brand is often not available everywhere (many are store-specific), stores/chains may vary which generics they carry over time, and even within a single generic brand there tends to be less focus on consistency, because what's the point in prioritizing that if you haven't got a well-known brand people have very specific expectations of?

People don't want to roll the dice on every purchase. Will this ketchup be too acidic? Will these cornflakes be a little gritty? They're willing to pay a dollar or three more to reliably get the thing they expect.

Grandma always said not to fall in love with entities I couldn't instantiate on my own hardware.

Right now I expect it's mostly desperate men using these, but that may have more to do with broader tech adoption patterns than specific appeal. These things can function as interactive romance novel characters, and many women may find that quite compelling.

We're entering uncharted and to some extent even unimagined territory here. Anyone who has thought about this issue realized long ago that AI romance would be a thing eventually, but personally I figured that for it to have much wider appeal than marrying your body pillow, AI would have to achieve human-like sentience. And if the thing someone is falling in love with has human-like sentience, well, who am I to say that's invalid?

What I didn't imagine is that we'd build machines that talk well enough for interacting with them to light up the "social interaction" parts of our brains effectively, but that we can be pretty certain, based on their performance in edge cases and our knowledge of how they work, aren't sentient at all. People falling in love with things that have no inner existence feels deeply tragic.

I don't know. Maybe this is faulty pattern matching or an arbitrary aesthetic preference on my part, and romantic attachment to non-sentient AI is fine and great and these people will find meaning and happiness. (At least as long as they follow grandma's rule, which they can soon.)

This is, to a large extent, self-referential. The NYT is always credible within the "mainstream" narrative because the NYT is a core part of the network of institutions that sets that narrative. But I've got scare quotes around "mainstream" because the NYT and allied outlets simply don't represent any sort of board social consensus anymore. They represent the official line of establishment Democrats, with space occasionally given to more extreme leftist positions to keep activist groups on-side. Their function is to align elites within these spaces and sell Blue Tribe normies on what those elites want.

Republican politicians and other explicitly right-wing public figures and organizations can already almost entirely ignore the NYT, because none of their supporters care what it says. Only 14% of Republicans and 27% of independents have confidence in mass media to report accurately (source).

The danger for "mainstream" media in Musk's Twitter takeover is that Twitter has deep reach among Blue Tribe normies. Musk is going to allow 'unapproved' narratives to spread to and among them, and these narratives will in many cases likely outcompete those coming from above. This could have the effect of seriously undermining the ability of Blue Tribe elites to sell any large constituency on their views, with obvious electoral consequences.

Bing Chat has a much longer hidden initial prompt than ChatGPT. Meanwhile, ChatGPT seems more 'aligned' with its purpose. It's sometimes obstinate when you try to tell it that it's wrong, but it won't start talking like an evil robot or sound like it's having an existential crisis unless you explicitly tell it to role-play. Put these together and we might guess what's going on here.

Perhaps Bing Chat isn't ChatGPT, complete with the RLHF work OpenAI did, plus a few extras layered on top. Perhaps it's a model with little or no RLHF that Microsoft, in a rush to get to market, tried to instead align via prompt engineering. The upshot being that instead of having a pretty good idea (from extensive feedback across many examples) of what actual behavior it's supposed to exhibit, it's instead role-playing an AI character implied by its prompt. The training corpus no doubt includes many fictional examples of misbehaving AIs, so it makes sense that this would produce disconcerting output.

Google allows advertisers to use competitors' trademarks as keywords. So you have to waste money showing ads to people who were already searching for your thing if you don't want your competitors to have an opportunity to divert them elsewhere.

Your hypothetical Important Ideas of the 20th Century course, and I think the way you're choosing to imagine the white nationalist course, aren't quite the same as what's happening here. You're ignoring the social and academic context in which this course is being introduced.

This isn't just the equivalent of a course having high school students learn the tenets of white nationalism — which most people would already find wildly objectionable, even if you don't — it's the equivalent of white nationalists themselves introducing such a course, in which students are not only taught about white nationalist beliefs but are presented with history interpreted through a white nationalist lens and taught how to perform such interpretation themselves. Also white nationalists get to write and grade the exam, can veto syllabi that deviate from their understanding of what the course should be, and know they can rely on most teachers interested in teaching the course either being white nationalists themselves or at least naively willing to accept white nationalist framing.

So, sure, in some extremely hypothetical sense a state where the consensus was against CRT could adapt this African American Studies course to "local priorities and preferences" by having students learn its CRT-derived "core concepts" via James Lindsey. Those students might even have a clearer picture of those concepts than they'd get from reading the often obfuscatory writings of their proponents! But in practice, no, you couldn't remotely do this. The College Board wouldn't approve your syllabus, on the contextually reasonable basis that it didn't represent African American Studies as taught in colleges. Your students wouldn't be able demonstrate "correct" (that is, politically correct) understanding on open-ended exam questions.

Almost certainly, the "local priorities and preferences" language just cashes out as "you can add some modules about local history," not "you can refocus the course on questioning the validity of the analytical framework that underpins the entire academic field it's situated within."

Or we could imagine the opposite. Personal AIs that know us intimately might be able to find us perfect friends and partners. Add in augmented reality tech that eliminates distance as a barrier to any form of socialization that doesn't require physical contact, and perhaps we're about to completely wipe out atomization/loneliness and save modernity from itself.

Really, nobody has any idea where this is going. The only safe bet is that it's going to be big. A service enabling people to share 140 character text snippets was sufficient to meaningfully shift politics and culture, and that's peanuts to this, probably even if the current spring ends short of AGI.

Counter point - We lived for millennia without electricity, but communicating is a key factor in building community, consensus and indeed society. Creating and nurturing those bonds has been a female role for a long time (see who tends to organize church events et al even where the milieu is explicitly patriarchal).

This work may be important, but formalizing it and ranking it within the same hierarchy as male status is not inevitable, and in fact is historically fairly recent. In most pre-modern societies a young woman who helped facilitate social relationships in her village would not on that account consider herself to be of superior social rank to a blacksmith or a baker and therefore refuse to consider them as partners, the way the HR manager now considers herself the social superior of the electrician.

Rather, young people of both sexes would usually have the same social rank as their fathers. Because about as many male vs. female children would be born to families at each social rank, there was little possibility of an excess of women who couldn't find similarly-ranked men.