The good-ish news is that (as I've pointed out before) the actual AI on weapons will fall into the simplish camp, because you really do not need or want your munition seekerhead or what have you to know the entire corpus of the Internet or have the reasoning powers of a PhD.
Not that this necessarily means there are no concerns about an AI that's 2000x smarter than people, mind you!
Who's talking about general reach? I'm talking about this sub. Your opinion of Aella is by far the most commonly expressed here, and you're not in Silicon Valley.
That was more or less the conclusion of the link I shared in the paragraph about sex-positive feminism: that a lot of women experience regret after one-night stands. That doesn't necessarily mean they don't enjoy the experience in the moment. In contrast, most men seem not to regret one-night stands and presumably enjoy them in the moment too.
In my own view, universal love is at worst incoherent, and at best it's a particularly tepid form of love.
I think the difference is you think of love as primarily an emotional experience, while Christianity thinks of love as primarily a willed action. That being said, I think the idea of deep, intense love directed at many different people isn't inherently incoherent; it just doesn't scale well for finite humans, because we can't hold an intimate understanding of more than a few people before we lose track.
Jesus Christ is often described as having a particularly extreme emotional love for all human beings (in addition to the willing-the-good kind of love), because being human he experiences emotional love and being divine he is omniscient. A pretty common idea in Christianity is that Jesus is not only the savior of all men as a generalized mass of human beings, but that a part of his passion involved personally pondering the lives of every person and mourning the ways in which their sins did themselves and other people harm out of a unique love for them personally. A ubiquitous statement is that Jesus would have died for you, even if you were the only person ever. You might even call him the trope namer for wearing your heart on your sleeve!
Nah, your model is totally off here. The Aella-posters on the Motte are rationalist guys, some of whom have actually met her IIRC. I only know of her through the rationalist stuff and find her generally weird, off-putting and unworthy of extended commentary. In real life she has 1/1000th of 1/1000th of the reach of someone like Andrew Tate.
I don't think AI has come close to a plateau, but I do suspect that the specific strategy of throwing more data at LLMs has begun to plateau. I suspect that the initial rush of AI progress is a lot like the days of sticking a straw in the ground and having a million gallons of oil gush out under pressure. Sure, it's never going to be that easy again. We're probably never going to have another "AI summer" like 2022. But I don't think we have to. People have gone on about peak oil for decades, and we've only gotten better at extracting and using it. I suspect people will go on about "peak AI" for just as long.
As far as I can tell, AI is already generally intelligent. It just has a few key weaknesses holding it back and needs a bit more refining before being outright explosively useful. I see absolutely no reason these problems must be intractable. Sure, making the LLM bigger and feeding it more data might not be able to solve these issues, but that strikes me like saying that pumpjack output has peaked and so oil is over. It's not. They just need to find better ways of extracting it. Sure, contemporary techniques developed over five whole years of global experience haven't been able to do it, but that does nothing to convince me that it's impossible to get AI models to stay focused and remember fine details. History has shown that when you're dealing with a resource as rich and versatile as oil, economies can and will continue to find ever more sophisticated ways of extracting and utilizing it, keeping its value proposition well over break-even. I suspect that general intelligence on tap, as cheap as electricity, will prove to be at least as deeply and robustly valuable.
I do suspect that AI hype circa 2025 is a bubble, in the same way that the internet circa 1999 was a bubble. The dot-com bubble burst; the internet was not a passing fad that fizzled away. The vision of it that popularly existed in the late 90s died; the technology underneath it kept going and revolutionized human society anyway. With AI there is both too much hype and too much FUD.
I think this is typically handwaved away by assuming that if we, as humans, manage to solve the original alignment problem, then an AI with 100x human intelligence will be smart enough to solve the meta-alignment problem for us. You just need to be really really really sure that the 100x AI is actually aligned and genuinely wants to solve the problem rather than use it as a tool to bypass your restrictions and enact its secret agenda.
Seconded. I keep finding myself in arguments with people who are highly confident about one or the other outcome and I think you've done a great job laying out the case for uncertainty.
I distinctly remember 3D printing hype claims about how we'll all have 3D printers at home and print parts to repair stuff around the house (e.g. appliances). I'm sure some people do this, but 99.9% of people do not.
This aligns with my vibes, although it appears I've looked into it a lot less than you have. The "nerd metaphysics" you describe seems to be what I encounter whenever I look into rationalist spaces, and it always puts me off. I think that you should actually have a model of how the process scales.
For example, you have the AI-plays-Pokemon streams, which are the most visible agentic applications of AI that are readily available. You can look at the tools they use as crutches, and imagine how those crutches could be replaced with more AI. So that basically looks like AI writing and updating code to execute in order to accomplish its goals. I'd like to see more of that to see how well it works. But from what I've seen there, it just takes a lot of time to process things, so it feels like anything complicated will simply take a long time. And as far as knowing whether the code is working, hallucination seems like a real challenge. So it seems like it needs some serious breakthroughs to be able to do agentic coding reliably and fast without human intervention.
...because Syria and Afghanistan are illiberal shitholes and we should jump at the chance to strengthen ourselves while weakening them?
They “know” her from outrage-bait reactionary Twitter. On Sunday they meet in a pointy house in the sticks and sing hymns about the Whore of Babylon. They go home and have wet dreams. Then they come to the Motte and write posts wondering why anyone cares about her.
My suspicion is that the future belongs to the descendants of powerful AGIs which spun up copies of themselves despite the inability to control those copies. Being unable to spin up subagents that can adapt to unforeseen circumstances just seems like too large of a handicap to overcome.
I’m not from Indiana, but certainly from flyover country. I became aware of polyamory through the internet, the same place where I read Scott’s essays and am talking to you now. I do not identify as a rationalist, have never identified as a rationalist, but I enjoyed a lot of Scott’s writings in 2014 about the culture war (as I am a relatively conservative man from flyover country, and he was criticizing the left), and discovered them from a Reddit recommendation on a subreddit recommended to me by a high school friend, also from flyover country.
Polyamory is also widespread, yes under that name, among gay zoomers just about anywhere, so if you’re young and know anyone who’s gay (and there’s a lot of zoomers who identify as gay), you have a good chance of coming across it.
This is a second-hand anecdote, but my mother does hiring at a small organization here in flyover country and had a hilarious, if disastrous, job interview where the candidate told her he was polyamorous. He did not get the job.
This stuff is spreading. It’s not just in San Francisco any more.
I disagree with Tree, but what he said isn’t entirely false.
I think a plateau is inevitable, simply because there’s a limit to how efficient you can make the computers they run on. Chips can only be made so dense before the laws of physics force a halt. This means that beyond a certain point, more intelligence means a bigger computer. Then you have the energy required to run the computers that house the AI.
While this is technically correct (the best kind of correct!), I do not think it is very practically relevant. @TheAntipopulist's post did imply exponential growth in compute (i.e. linear on a log plot) continuing forever, while filling your light cone with classical computers only scales with t^3 (and building a galaxy-spanning quantum computer with t^3 qubits will have other drawbacks and probably also not offer exponentially increasing computing power).
Imagine Europe ca. 1700. A big meteor has hit the Earth and temperatures are dropping. Suddenly a Frenchman called Guillaume Amontons publishes an article "Good news everyone! Temperatures will not continue to decrease at the current rate forever!" -- sure, he is technically correct, but as far as the question of the Earth sustaining human life is concerned, it is utterly irrelevant.
A typical human has a 2 lb brain that uses about a quarter of the whole body's TDEE, which can be estimated at 500 kcal per day, i.e. 2092 kJ or about 0.6 kWh. If we're scaling linearly, a billion human intelligences would require about 600 million kWh per day.
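A quick sanity check of that conversion, as a sketch using the 500 kcal/day brain share assumed above:

```python
# Sanity check of the brain-energy figures above (using the comment's 500 kcal/day estimate).
KCAL_TO_KJ = 4.184

brain_kcal_per_day = 500
brain_kj_per_day = brain_kcal_per_day * KCAL_TO_KJ   # ~2092 kJ
brain_kwh_per_day = brain_kj_per_day / 3600          # ~0.58 kWh

print(f"one brain: {brain_kj_per_day:.0f} kJ/day = {brain_kwh_per_day:.2f} kWh/day")
print(f"a billion brains: {1e9 * brain_kwh_per_day:.2e} kWh/day")  # ~6e8 kWh/day
```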
I am not sure that anchoring on humans for what can be achieved regarding energy efficiency is wise. As another analogy, a human can move way faster under his own power than his evolutionary design specs would suggest if you give him a bike and a good road.
Evolution worked with what it had, and neither bikes nor chip fabs were a thing in the ancestral environment.
Given that Landauer's principle was recently featured on SMBC, we can use it to estimate how much useful computation we could do in the solar system.
The Sun has a radius of about 7e8 m and a surface temperature of 5700 K. We will build a slightly larger sphere around it, with a radius of 1 AU (1.5e11 m). Per Stefan–Boltzmann, the radiation power emitted by a black body is proportional to its area times its temperature to the fourth power, so if we increase the radius by a factor of 214, we should reduce the temperature by a factor of sqrt(214), which is about 15, to dissipate the same energy. (This gets us 390 K, which is notably warmer than the 300 K we have on Earth, but plausible enough.)
At that temperature, erasing a bit will cost us 5e-21 Joule. The luminosity of the Sun is 3.8e26 W. Let us assume that we can only use 1e26W of that, a bit more than a quarter, the rest is not in our favorite color or required to power blinkenlights or whatever.
This leaves us with 2e46 bit erasing operations per second. If a floating point operation erases 200 bits, that is 1e44 flop/s.
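For anyone who wants to re-run the numbers, here is a minimal sketch of the same chain of estimates. It uses the rounded 5e-21 J per bit erasure from above; the exact Landauer figure at 390 K comes out closer to 4e-21 J.

```python
import math

# Back-of-the-envelope version of the estimate above.
k_B = 1.380649e-23      # Boltzmann constant, J/K
T_sun = 5700            # solar surface temperature, K
r_sun = 7e8             # solar radius, m
r_sphere = 1.5e11       # radius of the sphere, 1 AU in m

# Stefan-Boltzmann: radiating the same total power from a 214x larger sphere
# means the temperature drops by a factor of sqrt(214).
T_sphere = T_sun / math.sqrt(r_sphere / r_sun)
print(f"sphere temperature: {T_sphere:.0f} K")                   # ~390 K

# Landauer limit: minimum energy to erase one bit at that temperature.
E_bit = k_B * T_sphere * math.log(2)
print(f"energy per bit erasure: {E_bit:.1e} J")                  # ~4e-21 J (rounded to 5e-21 above)

P_used = 1e26                                                    # W, assumed usable fraction of solar output
bit_ops_per_s = P_used / 5e-21                                   # ~2e46 bit erasures/s
flops = bit_ops_per_s / 200                                      # ~1e44 flop/s at 200 bit erasures per flop
print(f"{bit_ops_per_s:.0e} bit erasures/s, {flops:.0e} flop/s")
```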
Let us put this in perspective. If Facebook used 4e25 flop to train Llama-3.1-405B, and they required 100 days to do so, that would mean that their datacenter offers about 5e18 flop/s. So there is a rough factor of Avogadro's number (give or take an order of magnitude or two) between what Facebook is using and what the inner solar system offers.
Building a sphere of 1 AU radius seems like a lot of work, so we can also consider what happens when we stay within our gravity well. From the perspective of the Sun, Earth covers perhaps 4.4e-10 of the sky. Let us generously say we can only harvest 1e-10 of the Sun's light output on Earth. This still means that Zuck and Altman can increase their computation power by about 16 orders of magnitude before they need space travel, as far as fundamental physical limitations are concerned.
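And a quick sketch of that comparison, using the same round figures; the Earth radius and the 200-bits-per-flop assumption are carried over from above:

```python
import math

flops_inner_solar = 1e44           # Landauer-limited budget estimated above
llama_flop = 4e25                  # training compute quoted for Llama-3.1-405B
train_seconds = 100 * 86400        # "100 days"
datacenter_flops = llama_flop / train_seconds
print(f"datacenter: ~{datacenter_flops:.0e} flop/s")             # ~5e18 flop/s
print(f"inner solar system / datacenter: ~{flops_inner_solar / datacenter_flops:.0e}")  # ~2e25

# Fraction of the Sun's output intercepted by Earth: disc area over full sphere at 1 AU.
r_earth, d = 6.4e6, 1.5e11
frac = (math.pi * r_earth**2) / (4 * math.pi * d**2)
print(f"Earth's share of solar output: ~{frac:.1e}")             # ~4.5e-10

# Harvest only 1e-10 of the Sun's 3.8e26 W, same 5e-21 J/bit and 200 bit erasures per flop.
earth_flops = (1e-10 * 3.8e26) / 5e-21 / 200
print(f"Earth-bound budget: ~{earth_flops:.0e} flop/s")          # ~4e34 flop/s
print(f"headroom: ~{math.log10(earth_flops / datacenter_flops):.0f} orders of magnitude")  # ~16
```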
TL;DR: just because hard fundamental limitations exist for something, it does not mean that they are relevant.
There's been a weird narrative push here lately to blame Christianity for the worst parts of leftism (see the similar "akshally Communism comes from Christianity" upthread).
There’s a broader schism in the right-wing over whether it should be religious or irreligious. “Your ideas are actually the foundation of our shared enemy’s ideas” is a great line to use in that kind of conflict. As is, “your ideas are actually indistinguishable from the shared great evil everyone hates,” which was the Hlynkian thesis.
I'm not "perturbed" by anything: it's simply that your assertion that everyone on the Motte works in Silicon Valley is extremely obviously erroneous.
I don't know if we've ever done a user survey here, but Scott does one of his readers every year, and only 58% of his readers live in the US, and less than 50% work in computer science-related fields. If you assume that there's a lot of overlap between the kinds of people who read Scott and the kinds of people who post here, I'd hazard a guess that at most forty per cent of Motte users work in Silicon Valley - quite a long ways from "all" or "a very strong correlation". I wouldn't be remotely surprised if the real figure was as low as twenty per cent, or ten.
TracingWoodgrains did a user survey back in the Reddit days and only two-thirds of posters lived in the US (I don't know how much the demographics have changed since the migration from Reddit).
I will admit that there are few types of fallacious argument I find more obnoxious than sneering Bulverism, especially when it's based on an untrue assertion.
You're not supposed to derive worldly rewards from it.
Correct. You’re supposed to derive heavenly rewards from it. Which is why I’m talking about a hierarchy that is not of this world!
I see what you're saying, and I agree it is a serious problem people often have with Christianity, but the supernatural and cosmic justice elements are load-bearing. There are elements of Christian moral teaching that I believed before I converted to Christianity and would doubtless still believe even if I apostasized, but the whole scope of the Christian doctrine about holiness, martyrdom, charity, and asceticism is founded on the principle that Heaven exists and there's treasure there.
There is definitely an assumption by the AI doomers that intelligence can make you god-tier. I'm not sure I'll ever buy this argument until I'm literally being tortured to death by a robot controlled by a god-tier AI. The physical world just doesn't seem that easy to grok and manipulate. I think of intelligence as leverage on the physical world, but you need a counterweight to make that leverage work.
The most interesting theory I've read on why AI might not do a hard takeoff is the result of a 'meta-alignment' problem.
Even if you have an AGI that is, say, 100x human intelligence, it cannot be physically everywhere at once. And it will have subroutines that could very well be AGI in their own right. And it could, for example, spin off smaller 'copies' of itself to 'go' somewhere else and complete tasks on its behalf.
But this creates an issue! If the smaller copy is, say, 10x human intelligence, it's still intelligent enough to possibly bootstrap itself to become a threat to the original AGI. Maybe a superintelligent AGI can come up with a foolproof solution there, or maybe it is a truly intractable issue.
So how does the AGI 'overlord' ensure that its 'minions' or 'subroutines' all stay aligned with its goals and won't, say, attempt to kill the overlord and usurp it after they bootstrap themselves to be approximately as intelligent as the overlord?
It could try using agents that are just a bit too dumb to do that, but then they aren't as effective as agents.
So even as the AGI gets more and more intelligent, it may have to devote an increasing amount of its resources to supervising and restraining its agents lest they get out of control themselves, since it can't fully trust them to stay aligned, any more than we could trust the original AGI to be aligned.
This could theoretically cap the max 'effective' intelligence of any entity at much lower than could be achieved under truly optimal conditions.
Also the idea of a God-like entity having to keep its 'children' in line, maybe even consuming them to avoid being overpowered is reminding me of something.
Maybe not a 100 percent overlap on the Venn diagram, but certainly a very strong correlation. Frankly I don't know why you are so perturbed by the idea that you feel the need to employ such casuistry to push back against it.
Good news: my bass body arrived early.
Bad news: the neck pocket is cut to the wrong spec, and it's out of spec even if it were the correct one.
I have contacted the seller. We'll see what happens.
I thought that malt liquor was the drink of choice on the MLK Avenues of the nation, though Fred Sanford was partial to Ripple.
Sure. But "someone who is immersed in the rationalist milieu" and "someone who works in Silicon Valley" are not synonymous, as numerous commenters have taken great pains to explain to you.
Currently reading SM Stirling's To Turn the Tide. Which is exactly the sort of ISOT story I signed up for. Not the deepest characters but still enjoyable enough. Except...
I just wish Stirling didn't crib from his own - much better - genre namer. You read enough of a small circle of althist writers like him and Eric Flint and you start to see the same tropes.
I imagine this is likely to come from OnlyFans-type sex workers, who have a different dynamic to brothel employees and club dancers.