ThenElection

0 followers   follows 3 users
joined 2022 September 05 16:19:15 UTC
Verified Email
User ID: 622

No bio...

But... There's no way that Aella would actually have trouble finding a partner who wants kids who is okay with her lifestyle. Not some captain of industry, but also not some random meth addict on the street either. There are plenty of total simps in tech with a solid paycheck who'd be thrilled to go for her, and she knows that.

This is all a marketing gimmick. Come save the poor whore with a heart of gold and a mind of platinum!

I often use it as a lookup tool and study aid, which can involve long conversations. But maybe that falls under "as a tool."

The last time I had a bona fide conversation with an LLM was maybe three months ago. These actual conversations are always about its consciousness, or lack thereof--if there's a spark there, I want to approach the LLM as a real being, to at least allow for the potentiality of something there. Haven't seen it yet.

Rates of completed suicide would work, at least in the senses I'd care about. Link a voter file with death certificates.

Unfortunately, no one has done this yet, afaict. There are state-aggregate studies showing that people in conservative states have higher suicide rates than those in liberal states, but ecological fallacy. I'm also not sure how to correct for demographics--or, rather, whether it makes sense to, since many of the same factors that correlate with suicide also correlate with Republican party affiliation.

I have used Hinge, however, and basing success on likes received is enough to make me discount the study before I even look at the data.

Although the Hinge post that included their top-line numbers has been scrubbed, it's still available on the Wayback Machine. They address your point directly:

When we look at the rate of men forming connections – rather than the rate that they are sent initial likes, as we did before – we find that index of inequality greatly decreases.

With straight men on Hinge, the Gini index of connections comes down to 0.324, or approximately the UK — a huge improvement.

As an aside, this movement toward equitability when dealing with connections exists with straight women too, so much so that the Gini index becomes meaningless.

That, arguably, supports your point (things are substantially less dire than looking at raw likes), though I think the credibility depends on how the junior data analyst defined "forming a connection."

What I'd love to see is the Gini coefficients of mutual matches for different dating apps in 2025.
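The likes-vs-connections distinction above is just the same Gini computation run over two different count distributions. A minimal sketch of that computation, using the standard formula over sorted counts (toy numbers, not Hinge's data):

```python
# Gini coefficient over sorted non-negative counts, via the standard
# formula G = (2 * sum_i i*x_i) / (n * sum_i x_i) - (n + 1) / n.
def gini(values):
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return 2 * weighted / (n * total) - (n + 1) / n

# Toy distributions of likes received:
even = [20, 20, 20, 20, 20]   # everyone gets the same number of likes
skewed = [0, 0, 0, 1, 99]     # nearly all likes go to one profile

print(gini(even))    # 0.0
print(gini(skewed))  # ~0.8
```

Run it on likes and then on mutual connections per user and you get exactly the kind of before/after comparison Hinge reported.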

Eh. I'm sympathetic to your point, but I don't buy that all your conditions (or even, strictly, any of them, except for obesity, which is itself fixable if you can provide health insurance for your family) are necessary for a good wife. Many of them are also heavily correlated: condition on a woman simply being college educated and having a professional career, and the majority probably meet your criteria.

A woman could do the exact same thing: list 10 traits that are requirements for a man and calculate how large her dating pool is. And in fact we did this for a single female friend who was bemoaning her dating situation: when we added up all her requirements, there was an expectation of only a couple dozen men in the entirety of California who met them. Is that a sign of how bleak women have it, or more a sign that her requirements were unrealistic?
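The back-of-envelope we ran for our friend amounts to multiplying a base population by the fraction meeting each requirement, assuming the requirements are independent (they aren't, which is one reason these estimates are so shaky). Every number below is invented purely for illustration:

```python
# Hypothetical base: single men of dating age in California.
base_population = 6_000_000

# Invented pass rates for each requirement -- not real statistics.
requirements = {
    "at least 6'2\"":               0.03,
    "earns > $200k":                0.05,
    "graduate degree":              0.15,
    "never married, no kids":       0.50,
    "compatible politics/religion": 0.30,
    "she finds him attractive":     0.10,
}

pool = base_population
for trait, rate in requirements.items():
    pool *= rate  # shrink the pool by each requirement's pass rate

print(f"Expected matches statewide: {pool:.0f}")  # a couple dozen
```

Six ordinary-sounding filters and a six-million-person pool collapses to a couple dozen people, which is why stacked requirement lists are a poor gauge of how bleak anyone's market actually is.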

And, speaking from experience, I'm a 5'3" bisexual guy in what's probably the toughest dating market for men in the USA, and I managed to get happily married, though currently living the degenerate DINK lifestyle. And I've notched up more (opposite-sex) partners than my wife. My personal most-restricting requirement was to find a single woman who made approximately as much as me, which is likely more restrictive than all your requirements put together.

All that said, it's absolutely true that (at least initially) dating is harder for men than for women. I'm not sure that anyone would dispute that, though, and I don't think your model provides good evidence for it. Better would be to come up with some quantitative and more direct measure for how hard dating is (for each percentile of attractiveness) and estimate it to provide a comparison.

Or, if the goal is to actually solve the problem, learn to exhibit masculinity, lift weights, and constantly put yourself out there and cast a wide net.

In my experience, the typical elite undergraduate student is a capable, smartish rule-follower, regardless of whether they're international or domestic. Dirt-poor internationals never make it to elite schools, and dirt-poor domestics rarely do. The dirt-poor domestics who do make it aren't particularly brilliant.

The occasions where someone is brilliant are rare, and the brilliant ones tend to be children of middle-class professionals, regardless of whether they're international or domestic. They do attend at higher rates than at typical universities.

Technical PhDs are always smart. Masters students are universally idiots.

LLMs generate gossip and tabloid drama about real celebrities; they wouldn't have any issues doing the same about AI-generated celebrities.

It will be a gradual process: first generating all the extras; then improving the real performances of real actors; then generated performances of dead actors; then licensed generated performances of living main actors; and then entirely generated main actors. And it won't be admitted at first. But having a reliable actor who always turns up sober and on time, looks like and does exactly what you want them to, has no time constraints, and doesn't take a substantial cut of the profits is a massive draw.

And if audiences insist on being sold a real life backstory about the actors to form parasocial relationships with them, well, Hollywood will be happy to generate and sell that to them too.

Women date and, to a lesser extent, marry and reproduce with lots of untrustworthy men. That doesn't mean that the men they don't date are trustworthy, but it does suggest that trustworthiness isn't the primary blocker. And if you're a man who can't get a date and wants one, it's better to focus on changing other aspects of yourself than some fuzzy concept of trustworthiness. Those other aspects being those that fall into the broad category of attractiveness, almost tautologically.

Sure. But I'm reminded of one applicant to Stanford whose admission essay about what matters to him was

#BlackLivesMatter #BlackLivesMatter ... (I'll spare readers the middle portion) ... #BlackLivesMatter #BlackLivesMatter

He got in.

Neither crassly based nor woke should have a place in universities, but the standards applied for crassness are very much not equal across the ideological spectrum.

Palantir recently started offering a "Meritocracy Fellowship" (https://jobs.lever.co/palantir/7fa0ceca-c30e-48de-9b27-f98469c374f3) to tackle this from another angle: cut out the middleman directly. Recruit smart students straight out of high school based on objective measurements, pay them, and hire them directly after the program.

The big risk for the student: what if they don't get hired by Palantir? Will they, after four successful years there as an FTE, have enough prestige to get competitive market offers?

For Palantir, it's nearly a pure win: they get to rely on an IQ proxy, pay participants a relative pittance for several months, and then select the best X% of performers to make lowball offers to.

If a student wrote a "based" indigenous studies essay, would that help them pass the class to get the degree they're paying two hundred thousand dollars for?

Of course, there's the opportunity to write and think about things that aren't either kind of slop. But I'm very skeptical that equal standards would be applied. Though I would say it's unlikely for any student to actually flunk out of Columbia for the content of their essays (or the quality of them, or anything really).

Curious: what would be the legalities involved in running a scam like your conspiracy theory? The demand for this kind of stuff seems to far outstrip the supply of it. Could I go hire two people off of Craigslist, engineer a scripted social media outrage that results in one or both being able to successfully fundraise for ???, and then split the proceeds with them? Assuming I'm not paying them to do anything actually illegal.

The hardest part would seem to be me getting my cut (no real reason for them to pay me anything once they realize I'm not assuming any real risk or doing anything). Or maybe use an AI generated video with no other real people involved?

the tariffs have not in fact collapsed the economy, while the institutions' commitment to being paranoid ninnies about covid did

I still find it shocking that everyone important just agreed to never talk about COVID and our response to it again. I don't know how anyone can discuss bad policies that wreck the economy without at least bringing it up to refute it being an example. It's like remembering the COVID shutdowns happened is icky and uncouth.

Republicans have a failure mode of cult of personality; Democrats have a failure mode of a cult not of personality or ideology but of whatever bureaucrats and various cultural elites organically land on as the Important Signifier of the day. It's distinctly less personalistic.

People learned that the message of the day was that Biden is the greatest person in the whole world, and they knew questioning it made you a Bad Person who must be punished. But the Democratic blob recognized a weakness in the candidate that they couldn't paper over and, in the span of a few weeks, shivved him, memory-holed him, and made Harris the greatest person in the whole world. That dynamic does not and could not exist with Republicans and Trump.

In Presidential politics, Democrats perform a kind of pseudo-personalism: the point of acting as if you believe X is the Great Person of History is not to indicate any true belief but to indicate tribal membership. Biden dead-enders were heavily marginalized everywhere a day after Harris became the heir apparent, if Biden dead-enders ever even existed.

It's not as clear to me as it is to Scott, though, that one cult is clearly less damaging than the other. The reaction to COVID did far more damage to our economy and wellbeing than the tariffs will (and I believe the tariffs are ridiculous and incredibly damaging), and that can be squarely laid at the feet of the neoliberal bureaucrats.

I mean it in the sense that LLMs are capable of producing a token stream identical to one an AI researcher would produce. This is mathematically proven--see the various universality theorems--but the proof has the critical drawback that it gives you no information about how to find that optimal set of weights.

An MLP absolutely could also do this, as could some absurd polynomial best fit (not, however, a ten- or quadrillion-dimensional linear model). What MLPs offer over polynomials, and transformers over MLPs, is increased training efficiency and stability for actually finding those weights.

We know that systems capable of acting like smart humans are possible (after all, there are smart humans). Will LLMs get us there? It's unclear. Could they, in the arid sense that there is some unknown collection of weights that would be capable of outputting tokens that simulate an OpenAI researcher working on novel tasks? Absolutely. (As to how to actually learn those weights, that's left as an exercise to the reader.)
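The expressivity-vs-trainability distinction is easy to see with polynomials, the simplest universal approximators on an interval. For a 1-D target the search for good coefficients happens to be a trivial least-squares problem; that convenience is exactly what doesn't generalize to harder settings. A toy sketch with NumPy:

```python
import numpy as np

# Universality says a high-enough-degree polynomial can approximate any
# smooth function on an interval; it says nothing about how hard the
# coefficients are to find in general.
x = np.linspace(-1.0, 1.0, 200)
y = np.sin(3 * x)  # stand-in for the "behavior" we want to imitate

coeffs = np.polyfit(x, y, deg=9)  # closed-form least-squares fit
approx = np.polyval(coeffs, x)
max_err = float(np.max(np.abs(approx - y)))

print(f"max error of degree-9 fit: {max_err:.1e}")
```

Here the fit is essentially free; for transformers, the existence of a good set of weights and a procedure that reliably finds them are two very different claims.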

I think the dynamism of the research program is relevant, though. Right now, you can, as an individual, decide to spend a quarter and a couple thousand dollars in compute to research a particular area of LLMs and have a reasonable expectation of finding something interesting, and sometimes it's actually useful. This isn't merely hypothetical but is something happening every single day. There is a lot of low-hanging fruit. Might there be some collection of a dozen different improvements on the horizon which, when taken collectively, would get us to AGI? Maybe. It's plausible, at least, while it's not plausible that a dozen different innovations are on the horizon that would enable a cheap base on Mars.

It depends on Amazon's exact implementation, but I just assumed it was a way for them to list a lower sticker price than what customers actually pay and expect the customer to just go through with the purchase anyway.

Similarly, back in 2021, due to increasing labor costs, a whole bunch of restaurants started adding a fixed-percentage "service fee" to the final bill that went into their general revenue (not a replacement for tipping). Was that anti-Biden?

I don't think so, and it wasn't perceived as such; the main difference is that Biden wasn't intentionally positioning himself as an advocate of higher costs the way Trump has positioned himself as an advocate of tariffs.

The flyers, from experience, are both needed and unheeded. And the floors heavier in engineering are worse about it.

I have no idea how much money my former employer wastes on toilet-queuing time for men--the endless bouncing from floor to floor trying to find an open stall. Apparently OSHA requires a certain number of toilets for a given employee count of either sex; when the count is in the double digits it works out to around 20 employees per toilet, but in the triple digits it rises to around 40 per toilet.
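A minimal sketch of that headcount arithmetic, using my own rough ratios above rather than OSHA's actual fixture table (which lives in the sanitation standard, 29 CFR 1910.141):

```python
import math

def required_toilets(employees: int) -> int:
    """Approximate minimum toilets for a given same-sex headcount,
    per the rough ratios described above (not the real OSHA table)."""
    per_toilet = 20 if employees < 100 else 40
    return max(1, math.ceil(employees / per_toilet))

print(required_toilets(80))   # 4
print(required_toilets(400))  # 10
```

So per floor the requirement loosens as headcount grows, which is roughly where the queues come from.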

At least we got to read weekly educational flyers posted in the bathroom about Testing on the Toilet (alongside other flyers asking engineers to remember to flush...)

As one thinker just posted on Truth Social an hour ago:

THE BEST DEFINITION OF INTELLIGENCE IS THE ABILITY TO PREDICT THE FUTURE!!!


Human heads used to be bigger, though. And childbirth is much less likely to result in death now than before, thanks to human intelligence and the heroic efforts of professionals like yourself. And if increases in intelligence did offer a significant reproductive benefit, larger hips that enabled that intelligence would be selected for.

How valuable is intelligence?

One data point that I've been mulling over: humans. We currently have the capability to continue to scale up our brains and intelligence (we could likely double our brain size before running into biological and physical constraints). And the very reason we evolved intelligence in the first place was that it gave an adaptive advantage to those who had more of it.

And yet larger brain size doesn't seem to be selected for in modern society. Our brains are smaller than our recent human ancestors' (~10% smaller). Intelligence and its correlates don't appear to positively affect fertility. There's now a reverse Flynn effect in some studies.

Of course, there are lots of potential reasons for this. Maybe the metabolic cost is too great; maybe our intelligence is "misaligned" with our reproductive goals; maybe we've self domesticated ourselves and overly intelligent people are more like cancer cells that need to be eliminated for the functioning of our emergent social organism.

But the point remains that winning a game of intelligence is not in itself something that leads to winning a war for resources. Other factors can and do take precedence.

This assumes that something like human level intelligence, give or take, is the best the universe can do. If super intelligence far exceeding human intelligence is realizable on hefty GPUs, I don't think we can draw any conclusions from the effects of marginal increases in human intelligence.

Sibling non-CWR post: https://www.themotte.org/post/1836/scott-come-on-obviously-the-purpose

Wrote a comment there, but another thought:

I think Scott is attempting a kind of meta-joke. TPOASIWID is a very useful lens to interpret systems through, but in widespread DR Twitter use, it's mostly used as a way to ascribe bad intent to systems. And because TPOASIWID, you can only judge TPOASIWID by the use of TPOASIWID on Twitter, and so TPOTPOASIWIDIWID and that's creating bad Twitter takes, which isn't valuable or useful. QED.

Cute, but it misses the mark. It's about finding useful ways to interact with a system, not a universal acid allowing you to weak man any argument or analysis.

"When a person shows you who they are, believe them."

No update on opinion. What it means to me: the most useful way to interact with a system is through modeling what it does and how it does it. Not what it says it does, not how it originated, not what its creator intended it to do, not what its subcomponents think it does, not what you want it to do, not what purpose it having would be the best for the world, not what the documentation says it does, not what the label on the tin says it does.

If you don't do this, you will run into trouble. For example, consider corporate DEI training sessions. The entire DEI training ecosystem, including outside trainers/consultants and corporate HR, will publicly state that they are doing it to help reduce bias and discrimination (along with some secondary claims around it increasing efficiency and innovation). Suppose an employee took this at face value, and he's deeply committed to racial DEI. He does some research, and it turns out in general these sessions increase discrimination and racism. And he does further research and is able to prove, with incontrovertible empirical evidence, that the sessions at his own company are making employees materially racist. He reports this to HR; surprisingly, they seem to ignore it. He thinks his report is being missed because of an overworked HR department, and so he publishes his research and evidence widely within the company.

What happens, do you think?

If you take HR's statements of their purpose at face value, you would expect them to effusively thank him for pointing this out, remedy the situation as quickly as possible, and maybe even give him a bonus for his exceptional effort in helping them better achieve their purpose.

If you instead think the purpose of HR is to tick boxes that protect the company from legal liability and to join in on popular fads, you aren't as sanguine about the employee's future. You might even expect him to be called into HR for a public desanguination.

When it comes to personal decision making, people who use one of these heuristics for ascribing purpose to impersonal systems are going to do much better than people who use the other.

Scott's post is, frankly, lame and disappointing. He doesn't even mention Stafford Beer and only has interest in responding to Twitter randos.

The 340 vs 440 score actually suggests the courts think that the retail job is harder than the warehouse job.

Apparently, the company had offered all of its retail employees the opportunity to transfer to the warehouse, and one of the plaintiffs had turned it down because the warehouse is loud and dirty and has very limited autonomy, and the only way she would ever take a job there is if... it offered a lot more money than her retail position.