I think the man who classifies Muslims as Christians will make more mistakes in his daily predictions (i.e., navigating daily life) than the man who classifies Mormons as Christians.
The validity of the categories is independent of the subject's self-identification, or of the ur-trope-namer’s decision as to who is allowed to use the name.
To make the obvious analogy, some kid may think they're 'nonbinary', but to the outside world, and to me, they will be put in one of the two original buckets; I'm not creating a new bucket for this nonsense. And in this case I'm not creating an extra Mormon bucket when the Christian bucket will do.
Everyone Is Cheating Their Way Through College (NYMag)
An article describing what has predictably been coming to college campuses since GPT-3 was released. The narration follows a particularly annoying Korean-American student trying to make quick bucks from LLM-cheating start-ups and a rather dumb girl who can't follow basic reasoning, which makes the read a bit aggravating and amusing, but overall the arc is not surprising. Recommended for a quick read. Basically all the grunt work of writing essays and the intro-level classes with lots of rote assignments seem to be totally destroyed by cheap and easy high-quality LLM output.
Some interesting highlights for me:
- There is a consensus in the article, shared even by the cheating students, that writing essays in "Indigenous studies, law, English, and a “hippie farming class” called Green Industries" is an important transformative experience, and that if young adults miss out/cheat on this for 4 years then we must be seriously worried about the next generation.
- It is not explored much what the students are doing with their time instead of writing these very important essays. There is one throwaway quote from a brain-rot girl about how she scrolls TikTok all day and has no time for essays. Perhaps all the students are getting one-shotted by dopamine-addiction algorithms, but perhaps they are not, and many are socializing or learning actually interesting things instead of writing indigenous-studies slop essays. This should be a major question, but it is just left unexplored.
- None of the journos or the academics quoted in the article can bring themselves to question whether these young adults should even be in university if they are all so eager to cheat (an earlier pandemic-era mass cheating spree is mentioned as well). There is a whole paragraph dedicated to justifying seemingly pointless essays, never-again-remembered calculus exercises, and the importance of doing "hard things" (which is apparently writing pointless essays and never-again-remembered calculus exercises). But there is not a single example of a "hard thing" students are missing out on because of LLMs in the whole article. Literally every single example is students automating busy work which should cost any 120+ IQ individual little brain power but lots of time. And there is a bizarre, out-of-place paragraph about the need to "consider students more holistically" with a nonsense blurb from some academic.
- The academics sound extremely lazy and whiny about trying the most obvious solution: ditch all coursework-based grading in favor of oral examinations and comprehensive graduation exams. This would immediately solve the whole problem (it would even align the incentives so that students use LLMs for studying instead of cheating), and it is not even a "revolutionary" solution, just how universities used to work not that long ago. But obviously this would fail 90%+ of current university students and likely destroy the entire industry, as the vast majority of the students providing its income stream are not nearly smart or conscientious enough to pass them.
We still have the de minimis exemption unavailable for China, which is a problem for individual consumers. It's basically impossible to buy anything from China now.
The hierarchy it’s replacing isn’t the hierarchy of government, but the more nebulous, albeit extremely real, hierarchy of informal status that drives people to compete for praise, attention, and mates.
I thought this was you saying "People still compete for praise, attention, and mates, but now the game is different" - because that would sound like worldly rewards. If you mean something people do instead of competing for those, then... it seems your prescription on earth actually is communism. You're saying it's not communist only because your reasons are different, whereas originally I thought your defense was along the lines of "Some Christian beliefs in isolation would prescribe communism, but if you consider the supernatural principles as well, it no longer prescribes communism even on earth."
What?
The scenario is vague in one important aspect. Do the police have good reason to think there's a baby in there?
If they had good reason to think there's a baby in there, then they do have exigent circumstances.
If they don't have good reason to think there's a baby in there, then they shouldn't be doing the search. If there's a baby in there anyway, it's an accident that they have no way to predict.
For all that the OP says that we shouldn't be allowed to make post hoc rationalizations, that's exactly what the scenario is set up to do--it's trying to imply that finding the baby means that the hunch was justified. But if the hunch was actually justified, the search is legal, and if the hunch wasn't justified, they shouldn't be doing the search based on the chance that the unjustified hunch turns out to be true by pure luck. By being vague about which of these two scenarios it is, it invokes justified hunches to say that you're supposed to follow unjustified hunches.
If the scenario was "the police read some tea leaves, decide to search someone based on the result, and find a baby" you could ask the exact same questions: isn't this a moral quandary where freedom from unjustified searches is important, but if you ignore the tea leaves, you may end up killing a baby? But we know the answer to this: No, we don't search people based on reading tea leaves, because tea leaves find babies only by luck. Even though that means that reading tea leaves does, in fact, sometimes find babies.
I seem to remember that the Drug War of old included an element of "it's your own fucking fault, just don't do drugs" and it still failed horribly. Is your contention that we just didn't try hard enough, that we just never had anything as persuasive as "You'll Cowards Don't Even Smoke Crack"?
Provably beating the average human, or even reaching the same level, would be a huge milestone that Elon would be shouting from every rooftop.
And yet still insufficient -- the set of human drivers includes a lot of people who are drunk/stoned/distracted/angry at any given moment -- perhaps unsurprisingly, these people cause a lot of accidents, which brings the average performance down substantially.
All you need to do to be much safer than average is not do those things; for me to feel safe sleeping (for example) in a robot car, I'd want a couple std deviations better than human average at least. I imagine trucking companies feel the same way (maybe even less risk tolerant) -- particularly considering that with automated trucks they no longer have a human to throw under the bus when he does something dumb.
I imagine this is likely to come from OnlyFans-type sex workers, who have a different dynamic to brothel employees and club dancers.
The good-ish news is that (as I've pointed out before) the actual AI on weapons will fall into the simplish camp, because you really do not need or want your munition seekerhead or what have you to know the entire corpus of the Internet or have the reasoning powers of a PhD.
Not that this necessarily means there are no concerns about an AI that's 2000x smarter than people, mind you!
Who's talking about general reach? I'm talking about this sub. Your opinion of Aella is by far the most commonly expressed here, and you're not in Silicon Valley.
That was more or less the conclusion of the link I shared in the paragraph about sex-positive feminism: that a lot of women experience regret after one-night stands. That doesn't necessarily mean they don't enjoy the experience in the moment. In contrast, most men seem not to regret one-night stands and presumably enjoy them in the moment too.
In my own view, universal love is at worst incoherent, and at best it's a particularly tepid form of love.
I think the difference is you think of love as primarily an emotional experience, while Christianity thinks of love as primarily a willed action. That being said, I think the idea of deep, intense love directed at many different people isn't inherently incoherent, it just doesn't scale well for finite humans because we can't hold the intimate understandings of more than a few people before we stop keeping track.
Jesus Christ is often described as having a particularly extreme emotional love for all human beings (in addition to the willing-the-good kind of love), because being human he experiences emotional love and being divine he is omniscient. A pretty common idea in Christianity is that Jesus is not only the savior of all men as a generalized mass of human beings, but that a part of his passion involved personally pondering the lives of every person and mourning the ways in which their sins did themselves and other people harm out of a unique love for them personally. A ubiquitous statement is that Jesus would have died for you, even if you were the only person ever. You might even call him the trope namer for wearing your heart on your sleeve!
Nah, your model is totally off here. The Aella-posters on the Motte are rationalist guys, some of whom have actually met her IIRC. I only know of her through the rationalist stuff and find her generally weird, off-putting and unworthy of extended commentary. In real life she has 1/1000th of 1/1000th of the reach of someone like Andrew Tate.
I don't think AI has come close to a plateau—I do suspect that specifically the strategy of throwing data at LLMs has begun to plateau. I suspect that the initial rush of AI progress is a lot like the days of sticking a straw in the ground and having a million gallons of oil gush out under pressure. Sure, it's never going to be that easy again. We're probably never going to have another "AI summer" like 2022. But I don't think we have to. People have gone on about peak oil for decades, and we've only gotten better at extracting and using it. I suspect people will go on about "peak AI" for just as long.
As far as I can tell, AI is already generally intelligent. It just has a few key weaknesses holding it back and needs a bit more refining before being outright explosively useful. I see absolutely no reason these problems must be intractable. Sure, making the LLM bigger and feeding it more data might not be able to solve these issues—but this strikes me like saying that pumpjack output has peaked and so oil is over. It's not. They just need to find better ways of extracting it. Sure, contemporary techniques developed over five whole years of global experience haven't been able to do it, but that does nothing to convince me that it's impossible to get AI models to stay focused and remember fine details. History has shown that when you're dealing with a resource as rich and versatile as oil, economies can and will continue to find ever more sophisticated ways of extracting and utilizing it, keeping its value proposition well over break-even. I suspect that general intelligence on tap as cheap as electricity will prove to be at least as deeply and robustly valuable.
I do suspect that AI hype circa 2025 is a bubble, in the same way that the internet circa 1999 was a bubble. The dot-com bubble burst; the internet was not a passing fad that fizzled away. The vision of it that popularly existed in the late 90s died; the technology underneath it kept going and revolutionized human society anyway. With AI there is both too much hype and too much FUD.
I think this is typically handwaved away by assuming that if we, as humans, manage to solve the original alignment problem, then an AI with 100x human intelligence will be smart enough to solve the meta-alignment problem for us. You just need to be really really really sure that the 100x AI is actually aligned and genuinely wants to solve the problem rather than use it as a tool to bypass your restrictions and enact its secret agenda.
Seconded. I keep finding myself in arguments with people who are highly confident about one or the other outcome and I think you've done a great job laying out the case for uncertainty.
I distinctly remember 3D printing hype claims about how we'll all have 3D printers at home and print parts to repair stuff around the house (e.g. appliances). I'm sure some people do this, but 99.9% of people do not.
This aligns with my vibes, although I've looked into it a lot less than you have, it appears. The "nerd metaphysics" you describe seems to always be what I encounter whenever I look into rationalist spaces, and it always puts me off. I think that you should actually have a model of how the process scales.
For example, you have the AI-plays-Pokemon streams, which are the most visible agentic applications of AI that are readily available. You can look at the tools they use as crutches, and imagine how they could be replaced with more AI. So that basically looks like AI writing and updating code to execute to accomplish its goals. I'd like to see more of that to see how well it works. But from what I've seen there, it just takes a lot of time to process things, so it feels like anything complicated will just take a lot of time. And then as far as knowing whether the code is working etc., hallucination seems like a real challenge. So it seems like it needs some serious breakthroughs to really be able to do agentic coding reliably and fast without human intervention.
...because Syria and Afghanistan are illiberal shitholes and we should jump at the chance to strengthen ourselves while weakening them?
They “know” her from outrage-bait reactionary Twitter. On Sunday they meet in a pointy house in the sticks and sing hymns about the Whore of Babylon. They go home and have wet dreams. Then they come to the Motte and write posts wondering why anyone cares about her.
My suspicion is that the future belongs to the descendants of powerful AGIs which spun up copies of themselves despite the inability to control those copies. Being unable to spin up subagents that can adapt to unforeseen circumstances just seems like too large of a handicap to overcome.
I’m not from Indiana, but certainly from flyover country. I became aware of polyamory through the internet, the same place where I read Scott’s essays and am talking to you now. I do not identify as a rationalist, have never identified as a rationalist, but I enjoyed a lot of Scott’s writings in 2014 about the culture war (as I am a relatively conservative man from flyover country, and he was criticizing the left), and discovered them from a Reddit recommendation on a subreddit recommended to me by a high school friend, also from flyover country.
Polyamory is also widespread, yes under that name, among gay zoomers just about anywhere, so if you’re young and know anyone who’s gay (and there’s a lot of zoomers who identify as gay), you have a good chance of coming across it.
This is a second-hand anecdote, but my mother does hiring at a small organization here in flyover country and had a hilarious, if disastrous, job interview where the candidate told her he was polyamorous. He did not get the job.
This stuff is spreading. It’s not just in San Francisco any more.
I disagree with Tree, but what he said isn’t entirely false about where the criticism comes from. But all the gory details definitely suggest some of the posters are insiders.
I think a plateau is inevitable, simply because there’s a limit to how efficient you can make the computers they run on. Chips can only be made so dense before the laws of physics force a halt. This means that beyond a certain point, more intelligence means a bigger computer. Then you have the energy required to run the computers that house the AI.
While this is technically correct (the best kind of correct!), and @TheAntipopulist's post did imply exponential growth in compute (i.e. linear on a log plot) continuing forever, whereas filling your light cone with classical computers only scales with t^3 (and building a galaxy-spanning quantum computer with t^3 qubits will have other drawbacks and probably also not offer exponentially increasing computing power), I do not think this is very practically relevant.
Imagine Europe ca. 1700. A big meteor has hit the Earth and temperatures are dropping. Suddenly a Frenchman called Guillaume Amontons publishes an article "Good news everyone! Temperatures will not continue to decrease at the current rate forever!" -- sure, he is technically correct, but as far as the question of the Earth sustaining human life is concerned, it is utterly irrelevant.
A typical human brain weighs roughly 3 lb and uses about 1/4 of the TDEE for the whole human, which can be estimated at 500 kcal per day, or 2,092 kJ, or about 0.6 kWh. If we're scaling linearly, a billion human intelligences would require about 600 million kWh per day.
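A quick back-of-the-envelope sketch of that arithmetic, using the 500 kcal/day and one-billion-brains figures from above:

```python
# Back-of-the-envelope check of the brain-energy arithmetic above.
KCAL_TO_KJ = 4.184
KJ_TO_KWH = 1 / 3600

brain_kcal_per_day = 500                              # ~1/4 of a typical TDEE (figure from the comment above)
brain_kj_per_day = brain_kcal_per_day * KCAL_TO_KJ    # ~2,092 kJ
brain_kwh_per_day = brain_kj_per_day * KJ_TO_KWH      # ~0.58 kWh

n_brains = 1e9                                        # a billion human-level intelligences
total_kwh_per_day = n_brains * brain_kwh_per_day      # ~6e8 kWh, i.e. ~600 million kWh

print(f"{brain_kwh_per_day:.2f} kWh/day per brain")
print(f"{total_kwh_per_day:.2e} kWh/day for a billion brains")
```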
I am not sure that anchoring on humans for what can be achieved regarding energy efficiency is wise. As another analogy, a human can move way faster under his own power than his evolutionary design specs would suggest if you give him a bike and a good road.
Evolution worked with what it had, and neither bikes nor chip fabs were a thing in the ancestral environment.
Given that Landauer's principle was recently featured on SMBC, we can use it to estimate how much useful computation we could do in the solar system.
The Sun has a radius of about 7e8 m and a surface temperature of 5700 K. We will build a slightly larger sphere around it, with a radius of 1 AU (1.5e11 m). Per Stefan–Boltzmann, the radiation power emitted from a black body is proportional to its area times its temperature to the fourth power, so if we increase the radius by a factor of 214, we should reduce the temperature by a factor of sqrt(214), which is about 15, to dissipate the same energy. (This gets us 390 K, which is notably warmer than the 300 K we have on Earth, but plausible enough.)
At that temperature, erasing a bit will cost us 5e-21 Joule. The luminosity of the Sun is 3.8e26 W. Let us assume that we can only use 1e26W of that, a bit more than a quarter, the rest is not in our favorite color or required to power blinkenlights or whatever.
This leaves us with 2e46 bit erasing operations per second. If a floating point operation erases 200 bits, that is 1e44 flop/s.
Let us put this in perspective. If Facebook used 4e25 flop to train Llama-3.1-405B and required 100 days to do so, that would mean their datacenter offers roughly 5e18 flop/s. So there is a factor of roughly 2e25, a few dozen Avogadro's numbers, between what Facebook is using and what the inner solar system offers.
Building a sphere of 1 AU radius seems like a lot of work, so we can also consider what happens when we stay within our gravity well. From the perspective of the Sun, Earth covers perhaps 4.4e-10 of the sky. Let us generously say we can only harvest 1e-10 of the Sun's light output on Earth. This still means that Zuck and Altman can increase their computation power by roughly 15 orders of magnitude before they need space travel, as far as fundamental physical limitations are concerned.
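For anyone who wants to fiddle with the numbers, here is a minimal Python sketch of the same estimate. The 200 bits per flop, the 1e26 W usable budget, and the 1e-10 Earth-harvest fraction are the assumptions carried over from the paragraphs above; note that the exact Landauer bound at 390 K comes out nearer 3.7e-21 J than the rounded 5e-21 J used there.

```python
import math

k_B = 1.380649e-23          # Boltzmann constant, J/K
T_SUN = 5700.0              # solar surface temperature, K
R_SUN = 7e8                 # solar radius, m
AU = 1.5e11                 # radius of the computing shell, m
L_SUN = 3.8e26              # solar luminosity, W

# Shell temperature: same radiated power over an area larger by (AU/R_SUN)^2,
# so T scales down by sqrt(AU/R_SUN) per Stefan-Boltzmann.
T_shell = T_SUN / math.sqrt(AU / R_SUN)          # ~390 K

E_bit = k_B * T_shell * math.log(2)              # Landauer bound, ~3.7e-21 J (text rounds up to 5e-21)
P_usable = 1e26                                  # W, the fraction of L_SUN assumed usable above
BITS_PER_FLOP = 200                              # assumption from the text

flops_shell = P_usable / E_bit / BITS_PER_FLOP   # ~1e44 flop/s for the 1 AU shell

# Facebook/Llama comparison: 4e25 flop over 100 days of wall-clock time.
flops_fb = 4e25 / (100 * 86400)                  # ~5e18 flop/s

# Earth-bound version: harvest ~1e-10 of the usable budget without leaving the gravity well.
flops_earth = 1e-10 * P_usable / E_bit / BITS_PER_FLOP

print(f"shell temperature      : {T_shell:.0f} K")
print(f"1 AU shell             : {flops_shell:.1e} flop/s")
print(f"Llama-3.1-405B training: {flops_fb:.1e} flop/s")
print(f"Earth-bound harvest    : {flops_earth:.1e} flop/s "
      f"(~{math.log10(flops_earth / flops_fb):.0f} orders of magnitude of headroom)")
```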
TL;DR: just because hard fundamental limitations exist for something, it does not mean that they are relevant.
There's been a weird narrative push here lately to blame Christianity for the worst parts of leftism (see the similar "akshally Communism comes from Christianity" upthread).
There’s a broader schism in the right-wing over whether it should be religious or irreligious. “Your ideas are actually the foundation of our shared enemy’s ideas” is a great line to use in that kind of conflict. As is, “your ideas are actually indistinguishable from the shared great evil everyone hates,” which was the Hlynkian thesis.
But I don’t hear about her from reactionary Twitter, I hear about her from Rationalists and the Rationalist adjacent. In other words, people who are disproportionately not normal Christians and who are weighted towards Silicon Valley.
Also, let’s be honest, your post was very clearly accusing people of being hypocritical perverts who denounce Aella in public whilst having wet dreams about her in private. I have many hypocrisies but that is not one of them, and I am telling you plainly that I think your model is wrong.
‘Fun is bad and you’ll pay’ ignores the very real dissatisfactions and disillusionments that have spread through society in the wake of Free Love.