This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
I think a plateau is inevitable, simply because there's a limit to how efficient you can make the computers these AIs run on. Chips can only be made so dense before the laws of physics force a halt. This means that beyond a certain point, more intelligence means a bigger computer. Then you have the energy required to run the computers that house the AI.
A typical human brain weighs about 3 lb and uses roughly a quarter of the body's TDEE, which can be estimated at 500 kcal, or 2092 kilojoules, or about 0.6 kWh per day. If we're scaling linearly, a billion human-level intelligences would need about 600 million kWh per day. An industrial city of a million people, per Quora, uses 11.45 billion kWh a year, so this works out to nearly twenty such cities. You're going to need a significant investment in building the data center, powering it, cooling it, etc. This isn't easy, though it's probably doable if you're convinced it's a sure thing and the answers are worth it.
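For anyone who wants to poke at the arithmetic, here is a quick Python sketch of that estimate (the 500 kcal/day figure, the billion-minds count, and the Quora city number are just the assumptions above, not measured data):

```python
# A quick check of the arithmetic above (assumed figures, not measured data).
KCAL_TO_KWH = 4184 / 3.6e6              # 1 kcal = 4184 J; 1 kWh = 3.6e6 J

brain_kcal_per_day = 500                # ~1/4 of a ~2000 kcal TDEE
brain_kwh_per_day = brain_kcal_per_day * KCAL_TO_KWH   # ~0.58 kWh/day

n_minds = 1e9                           # a billion human-level intelligences
fleet_kwh_per_day = n_minds * brain_kwh_per_day        # ~6e8 kWh/day
fleet_kwh_per_year = fleet_kwh_per_day * 365           # ~2.1e11 kWh/year

city_kwh_per_year = 11.45e9             # the quoted figure for a city of a million
print(f"~{fleet_kwh_per_year / city_kwh_per_year:.0f}x that city's annual usage")
```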
As to the second question, I'm not sure that all problems can be solved; there are some things in mathematics that are considered extremely difficult, if not impossible. And a lot of social problems are a matter of balancing priorities more than a question of intellectual ability.
As to the third question, I think it's highly unlikely that the people most likely to successfully build a human-level or above AI are the ones least concerned with alignment. The military exists, in short, to make enemies dead. They don't want an AI that gets morally superior when told to bomb someone. I suspect the same is true of business in some cases. Health insurance companies are already using AI to evaluate claims; they don't want one that will approve expensive treatments. So there's a hidden second question of whether early adopters have the same ideas about alignment that we assume they do. They probably don't.
While this is technically correct (the best kind of correct!), I do not think it is very practically relevant, even though @TheAntipopulist's post did imply exponential growth in compute (i.e. linear on a log plot) continuing forever, whereas filling your light cone with classical computers only scales as t^3 (and a galaxy-spanning quantum computer with t^3 qubits would have other drawbacks and probably would not offer exponentially increasing computing power either).
Imagine Europe ca. 1700. A big meteor has hit the Earth and temperatures are dropping. Suddenly a Frenchman called Guillaume Amontons publishes an article "Good news everyone! Temperatures will not continue to decrease at the current rate forever!" -- sure, he is technically correct, but as far as the question of the Earth sustaining human life is concerned, it is utterly irrelevant.
I am not sure that anchoring on humans for what can be achieved regarding energy efficiency is wise. As another analogy, a human can move way faster under his own power than his evolutionary design specs would suggest if you give him a bike and a good road.
Evolution worked with what it had, and neither bikes nor chip fabs were a thing in the ancestral environment.
Given that Landauer's principle was recently featured on SMBC, we can use it to estimate how much useful computation we could do in the solar system.
The Sun has a radius of about 7e8 m and a surface temperature of 5700 K. We will build a slightly larger sphere around it, with a radius of 1 AU (1.5e11 m). Per Stefan–Boltzmann, the radiation power emitted by a black body is proportional to its area times its temperature to the fourth power, so if we increase the radius by a factor of 214, we can reduce the temperature by a factor of sqrt(214), about 15, and still dissipate the same power. (This gets us 390 K, which is notably warmer than the 300 K we have on Earth, but plausible enough.)
At that temperature, erasing a bit costs at least kT·ln(2), about 4e-21 J; call it 5e-21 J to be safe. The luminosity of the Sun is 3.8e26 W. Let us assume we can only use 1e26 W of that, a bit more than a quarter; the rest is not in our favorite color, or is required to power blinkenlights, or whatever.
This leaves us with 2e46 bit erasing operations per second. If a floating point operation erases 200 bits, that is 1e44 flop/s.
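If you want to check the numbers, here is a rough Python sketch of the estimate so far (the 5e-21 J per bit and 200 bit-erasures per flop are the assumed figures above, not measured constants):

```python
import math

# A rough reproduction of the estimate above (a sketch, not a rigorous model).
k_B = 1.380649e-23      # Boltzmann constant, J/K
R_SUN = 7e8             # solar radius, m
T_SUN = 5700.0          # solar surface temperature, K
R_SHELL = 1.5e11        # 1 AU, radius of the hypothetical shell, m

# Stefan-Boltzmann: radiated power ~ area * T^4, so a shell with the same
# output power but radius scaled up by f can run cooler by a factor sqrt(f).
f = R_SHELL / R_SUN                          # ~214
T_shell = T_SUN / math.sqrt(f)               # ~390 K

# Landauer bound at the shell temperature; the post rounds this up to 5e-21 J.
strict_bound = k_B * T_shell * math.log(2)   # ~3.7e-21 J per bit erased
cost_per_bit = 5e-21                         # J, the rounded figure used above

P_usable = 1e26                              # W, a bit more than a quarter of the Sun
bit_ops = P_usable / cost_per_bit            # ~2e46 bit erasures per second
flops = bit_ops / 200                        # ~1e44 flop/s at 200 erasures per flop

print(f"shell at ~{T_shell:.0f} K, strict Landauer ~{strict_bound:.1e} J/bit, "
      f"~{flops:.1e} flop/s")
```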
Let us put this in perspective. If Facebook used 4e25 flop to train Llama-3.1-405B and took 100 days to do so, their datacenter delivers roughly 5e18 flop/s. So there is a factor of roughly 2e25 (a few dozen Avogadro's numbers) between what Facebook is using and what the inner solar system offers.
Building a sphere of 1 AU radius seems like a lot of work, so we can also consider what happens if we stay within our gravity well. From the perspective of the Sun, Earth covers perhaps 4.4e-10 of the sky. Let us generously say we can only harvest 1e-10 of the Sun's light output on Earth. This still means that Zuck and Altman can increase their computational power by about 16 orders of magnitude before they need space travel, as far as fundamental physical limitations are concerned.
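And the same kind of sketch for the comparison (the 4e25 flop total, the 100-day training run, and the 1e-10 harvesting fraction are the assumptions above):

```python
import math

# Same back-of-the-envelope spirit: compare the solar-system ceiling to a
# present-day training run and to an Earth-bound harvest (assumed figures only).
llama_flop = 4e25                        # assumed total training compute
train_seconds = 100 * 86400              # assumed 100-day run
datacenter_flops = llama_flop / train_seconds      # ~5e18 flop/s

solar_flops = 1e44                       # ceiling from the previous sketch
ratio_solar = solar_flops / datacenter_flops       # ~2e25
print(f"solar ceiling vs datacenter: ~10^{math.log10(ratio_solar):.0f}")

# Earth intercepts ~4.4e-10 of the Sun's output; assume we harvest only 1e-10.
earth_power = 1e-10 * 3.8e26             # ~4e16 W
earth_flops = earth_power / 5e-21 / 200            # ~4e34 flop/s
ratio_earth = earth_flops / datacenter_flops       # ~8e15, ~16 orders of magnitude
print(f"Earth-bound headroom: ~10^{math.log10(ratio_earth):.0f}")
```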
TL;DR: just because hard fundamental limitations exist for something, it does not mean that they are relevant.
What if we get an AI so smart that it figures out a way to circumvent these particular laws of physics? I'm 50% joking with this follow-up question, and 50% serious.
The fact that there will be a big emphasis on designing AI to be able to bomb people without question is not exactly something that increases my confidence in alignment! I think the argument you're making here is more along the lines of 'following directions will always be a critical component of AI effectiveness, so the problem will largely solve itself'. I think that's somewhat plausible for simplish AI, but it gets less plausible for an AI that's 2000x smarter than people.
I mean I think the rub is that the alignment problem is actually two problems.
First, can an AI that is an agent in its own right be corralled in such a way that it's not a threat to humans? I think it's plausible: if you put in things that force it to respect human rights, dignity, and safety, and you can prevent the AI from getting rid of those restrictions, sure, it makes sense.
Yet the second problem is the specific goals the AI itself is designed for. If I have a machine to plan my wars, it has to be smart; it has to be a true AGI with goals. It does not, however, have to care about human lives. In fact, such an AI works better without that. And that's assuming an ethical group of people. Give Pinochet an AGI 500 times smarter than a human and it will absolutely harm humans in service of the directive of keeping Pinochet in power.
Pinochet stepped down from power voluntarily. Like as a factual historical matter he clearly had goals other than 'remain in power at all costs'. I would point to 'defeat communism' and 'grow the Chilean economy', both worthy goals, as examples of things he probably prioritized over regime stability.
This is the danger that economists like Tyler Cowen say is most pressing, i.e. not some sci-fi scenario of Terminator killing us all, but humans using AI as a tool in malicious ways. And yeah, if we don't get to omni-capable superintelligences then I'd say that would definitely be the main concern, although I wouldn't really know how to address it. Maybe cut off datacenter access to third-world countries as part of sanctions packages? Maybe have police AIs that counter them? It's hard to say when we don't know how far AI will go.
If the current state of the international arms market is any indication, large, relatively respectable countries like the US and Russia will give them to smaller, sketchier allied countries like Pakistan and Iran, and those sketchier allied countries will then give them to terrorist groups like Hezbollah and Lashkar-e-Taiba. So it might be pretty difficult to prevent. You also have the problem of private firms in Europe and Asia selling them to rogue-ish nations like Libya.
The good-ish news is that (as I've pointed out before) the actual AI on weapons will fall into the simplish camp, because you really do not need or want your munition's seeker head, or what have you, to know the entire corpus of the Internet or have the reasoning powers of a PhD.
Not that this necessarily means there are no concerns about an AI that's 2000x smarter than people, mind you!