ControlsFreak
This one is so conceptually difficult. It's super easy to let the economist's mindset take over and view everything as "exchange" or "working". When we got married, my wife lived in the US with me but was not legally able to work for a period of time. I was also required by the government to support her. Of course, since I was working and she wasn't, she's not going to just sit around and drink mai tais all day. She made some meals, did dishes or laundry or whatever. Just stuff around the house to keep herself occupied, while also obviously doing other things, too. Is that "working"? Should we deport every single one of those people who legally come here, on a legal path to being authorized to work, if they so much as lift a finger to put their spouse's dishes in the dishwasher one time? I have to imagine that most people think obviously not.
On the other hand, there are obviously schemes in place where people essentially hire a housekeeper under the table. Distinguishing between different types of situations and what "counts" as "working" is extremely hard in general.
Different things work for different people. Like any other behavioral change, it's nearly impossible to have a single across-the-board strategy that is going to work for everyone. Some that I've heard of or have worked for people I know:
- Pick a gym on your way to work (if you're lucky and have one available), so if you take a shower there, it at least cuts out the extra driving time, and every time you drive by, it's a reminder
- Spouse or other training buddy; some people like the 'accountability' of committing to meeting with someone else at a particular time
- The various suggestions around turning something into a 'habit'; I know a bunch of people who like the whole 'atomic habits' thing
- Similarly, whole groups of folks like SMART goals; there are various ways to track your numbers, and you can pick a method that helps you get a 'win' nearly every time you go
- "Change your identity"; I've heard that various folks feel like they just have to change their self-perception. "I'm a person who does X."
- Rational analysis of costs/benefits; they say that if exercise were a drug, it would be the single most effective drug we've ever had at improving a whole slew of health measures; a particular one that some people care about is old-age quality of life: they really want to be able to play with or pick up their grandchildren or whatever
- There is some research on this, and people have tried to put together some conceptual scaffolding; might be worth checking out. I learned about some of it here and here; I'd also pay attention to the discussion on flexibility and versions of "slack"
- Obviously, life is full of tradeoffs, but instead of looking at the next highest value use of your time, think about how you could identify and trade off some of the lowest value uses of your time
it's impossible to answer your original question of whether the board's actions against the CEO are "Fascist Authoritarian" or not
Ok, great. Glad to know that you would not be able to conclude that either side in the example scenario is a "Fascist Authoritarian". Now hopefully we'll find out what @WandererintheWilderness thinks we can conclude.
You might notice that neither side in my example scenario had any political descriptors attached.
Would you, personally, use the form of reasoning you're describing and come to the conclusion that one or the other side in my example scenario is "Fascist Authoritarian"? If so, please describe how you used that reasoning to reach that conclusion.
...I hate to just ask again, but, uh, how high have you tried? My general belief is that supply curves slope upwards.
If the board of a company fires the CEO, but he tries to lock the doors to the building and hole up inside, so the board calls the police and has him evicted ("at gunpoint"), does that make them "Fascist Authoritarians"?
at any reasonable wage
How high have you tried?
Only a couple minor responses, as I think we're mostly understanding each other.
this is the sense in which I don't see a reachable point where honesty and bargaining come to strictly dominate.
My only quibble is that I don't think we really need the "honesty and" part. The question really is whether, even with dishonesty, bargaining can be achieved.
As a note I do expect that bargaining frictions will be reduced, but the existential question is whether they will be reduced by a factor large enough to compensate for the increased destructiveness of a conflict that escalates out of control.
The weirdly good thing about the increase in destructiveness ("good" only in the narrow sense of bargaining and likelihood of war, not necessarily in general) is that this increases costs to both sides in the event of war. As such, it increases the range of possible bargaining solutions that keep the peace. Both factors (this and the reduced bargaining frictions) should decrease the likelihood of war.
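For concreteness, here's a minimal sketch of that bargaining-range logic in the standard Fearon-style setup; the numbers are purely illustrative assumptions on my part, not anything from the actual scenario:

```python
# Toy bargaining model (illustrative assumptions): the disputed pie is
# normalized to 1, side A wins a war with probability p, and each side pays
# its own cost for fighting.

def bargaining_range(p, cost_a, cost_b):
    """Any peaceful split giving A a share x with p - cost_a <= x <= p + cost_b
    leaves both sides at least as well off as fighting, so the width of the
    war-avoiding range is cost_a + cost_b."""
    return p - cost_a, p + cost_b

# Modest costs of war -> modest range of acceptable deals.
print(bargaining_range(p=0.6, cost_a=0.05, cost_b=0.05))  # roughly (0.55, 0.65)
# More destructive war -> higher costs -> wider range of deals both prefer to war.
print(bargaining_range(p=0.6, cost_a=0.30, cost_b=0.30))  # roughly (0.30, 0.90)
```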
Verily, it seems like things are proceeding about as I predicted over a year ago. I pointed out in a parent comment that the #Resisters suffer a coordination game problem, and they lack any clear object to coordinate around. There is unlikely to be a singular event that causes all of the resisting bureaucrats to simultaneously stick their necks out to create a large conflagration where they plausibly have more resources and power that they can bring to bear than the President. Instead, when USIP tries to resist, other bureaucrats sit on the sidelines and watch, perhaps wondering what will happen to them or if they can come up with a plan on their own. But they will not rush to allocate some alternative police force to protect USIP HQ. The head of USIP basically has to decide whether he or she is going to, on his/her own, resist and refuse to let the President's political will prevail.
...but much like I predicted, if and when it comes to the point of, "We're not going to let you into the building," the President can clearly muster the raw force of boots to force the issue. There is a roundable-to-zero chance that USIP's paltry security team is going to muster enough force or start shooting bullets. This just isn't the way that the war with the bureaucracy will be fought. If an agency pulls a minor stunt to not let them into the building, the President can and will have his team show up with a very minor show of force, and that will basically be the end of that form of resistance.
Of course, they will take it to the courts, and there, battles can go different ways. Different agencies have different statutes passed by Congress, and different particular legal battles may be resolved in different ways. For the most part, the primary questions are going to revolve around the judiciary, to what extent the executive complies, on what timescales, etc. We see that playing out in other domains. "Some silly bureaucrats think they can #resist by just locking the doors to the building," was never a plausible path.
Postscript. Matt Levine sometimes talks about the question of, "Who really controls a company?" Often, this comes up for him in battles between CEOs and boards, where they're like both trying to fire each other. Similarly, there are about zero successful attempts of the type "he had the keys to the building, so he locked the doors". However, he notes that sometimes, things like, "He's the only one who has the passwords to access their bank accounts," or whatever, tend to be more annoying. Sure, you can eventually go through the courts and get them to order the bank to turn control over to whoever, but banks are reluctant to take that sort of action on their own without a court involved. Obviously, situations like, "They hold the only keys to MicroStrategy's vault of Bitcoin or the encrypted vault that contains their core product," or whatever may be even more contentious. Fun to think about sometimes, but yeah, "We locked the physical doors," is basically never a viable strategy.
I mean, I kinda get your point that it's the way that he thinks about it, but he also says that it gives us straightforward bounds:
A paperclip-maximizing superintelligence is nowhere near as powerful as a paperclip-maximizing time machine. The time machine can do the equivalent of buying winning lottery tickets from lottery machines that have been thermodynamically randomized; a superintelligence can’t, at least not directly without rigging the lottery or whatever.
But a paperclip-maximizing strong general superintelligence is epistemically and instrumentally efficient, relative to you, or to me. Any time we see it can get at least X paperclips by doing Y, we should expect that it gets X or more paperclips by doing Y or something that leads to even more paperclips than that, because it’s not going to miss the strategy we see.
So in that sense, searching our own brains for how a time machine would get paperclips, asking ourselves how many paperclips are in principle possible and how they could be obtained, is a way of getting our own brains to consider lower bounds on the problem without the implicit stupidity assertions that our brains unwittingly use to constrain story characters. Part of the point of telling people to think about time machines instead of superintelligences was to get past the ways they imagine superintelligences being stupid. Of course that didn’t work either, but it was worth a try.
So, I guess, like, think about the best possible plans you could come up with to put some error bars on the expected value of war. Perhaps notice that political scientists don't just ask the question, "Why is there war at all?" (...coming up with the answer involving bargaining frictions...) but also the question of why war is actually still somewhat rare, especially if we think about all of the substantive disagreements there are out there. They point out that the vast majority of wars that are started actually end surprisingly quickly; often, as some information is learned in the process, a settlement is quickly reached. Superintelligences are going to be wayyyyyyyyy better at driving down those error bars and finding acceptable settlements.
This is where I'm appealing to things like the >90% draw rate in computer chess (when the starting positions are not specifically biased).
I think that's a fact particular to chess - I don't expect the same result in computer Go / othello / some other game that is less structurally prone to having draws.
I guess it's not the draws, themselves, that are "the thing". Let me try to put it another way. One of the top GMs in the world made a comment not too long ago about their experience working with very powerful computers. He said something along the lines of, "With the computer, it's always either zeros or winning." That is, he basically viewed it as: once you have enough computronium, for many many many positions, either the computer sees a way to essentially just straight equalize or it can see out to a win. Now, obviously, this is not strictly true, and it's obviously not true in all positions, as you get closer to the start of the game. But they can see the expected outcome sooo vastly better than we can. In the same way that people want to blow up that ability to things like "can engage in warfare sooo vastly better than we can", it should also blow up their ability to see expected outcomes and come to negotiated settlements sooo vastly better than we can.
I don't see how improvement in those models means that there is a reachable point where winning strategies switch from being based on deception and trickery to being based on cooperation stemming from mutual knowledge of each others' strategies
The attempted resolution in the financial markets paradox is that people just stop investing in more information. Could they double down on deception and trickery? Perhaps. But that seems like an unlikely result, game-theoretically. "Babbling equilibrium" or "cheap talk" are sometimes invoked, depending on the specific formalization. There are others that aren't in that wiki article. I could walk through a bunch of different models for how humans try to deal with deception and trickery in different domains. Presumably a superintelligence will know all of them and more... and execute even better in implementing them. It took me a long time to realize this, but when you think of deception and trickery as part of the strategy set, then the correct game-theoretic notion of equilibrium is not necessarily "cooperation stemming from mutual knowledge of each other's strategy", but "the appropriate equilibrium stemming from mutual knowledge of each other's strategy, which may contain deception and trickery, and you are each reasoning about the other's ability to engage in deception and trickery, the value the other may obtain from such, etc." Of course I know that my opponent may try deception and trickery, so I need to reason about it. A superintelligence will reason about it even better. Probably the easiest thing to think about here is again the game Diplomacy.
Where the mere game of Diplomacy differs from actual war in the real world is that we have good reason to believe that the costs of engaging in war are much much much higher, so we have a very big bargaining range, and we need quite significant bargaining frictions to get in the way. I still don't see how a superintelligence doesn't reduce the bargaining friction.
For the record, you don't have a problem with me. You have a problem with the people who hold the position that we are approaching an AI singularity and that doom is inevitable because the AI will have all these incredible characteristics. I don't actually hold that position; I'm just investigating it.
In any event, I again don't think it needs to be actually omniscient. It just needs to be able to reduce error bounds enough to eliminate the bargaining friction. Since war is very costly, it certainly doesn't need to be perfect; it just needs to get the error bars down enough. Think of it as a continuum. As the ability to gather information, model, and predict accurately goes up, the likelihood of war goes down, since the bargaining frictions due to uncertainty are reduced. Yes yes, it may be only when we take the limit that the likelihood of war goes down to precisely zero. I'm not even quite sure of that; since war is so costly, we can probably still tolerate a fair amount of uncertainty and still remain in a region where settlements can be negotiated.
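To put the continuum in concrete terms, here's a toy sketch (assumptions and numbers are mine): bargaining only breaks down when the two sides' estimates of the war outcome diverge by more than the combined costs of fighting, so shrinking the error bars shrinks the set of cases where war can happen at all.

```python
# Toy illustration (my own numbers): each side holds an estimate of A's
# probability of winning. A will insist on at least its estimate minus its
# cost of fighting; B will concede at most its estimate plus its own cost.

def deal_exists(p_hat_a, p_hat_b, cost_a, cost_b):
    """A peaceful split exists unless A's optimism exceeds B's estimate by
    more than the combined costs of war."""
    return (p_hat_a - cost_a) <= (p_hat_b + cost_b)

# Wide error bars: estimates differ by 0.4, more than the 0.2 total cost -> no deal.
print(deal_exists(p_hat_a=0.8, p_hat_b=0.4, cost_a=0.10, cost_b=0.10))    # False
# Tighter error bars: estimates differ by only 0.04, well under the cost -> deal exists.
print(deal_exists(p_hat_a=0.62, p_hat_b=0.58, cost_a=0.10, cost_b=0.10))  # True
```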
The AI singularity/doom people think that, for all intents and purposes, we're headed for that limit. They may be wrong. But if one believes their premise, then I think the conclusion would be that war goes to zero.
How do you know?
I don't know! I'm just temporarily importing my understanding of the tenets held by the singularitarian doomerists. They seem convinced that there's nothing we can do, not militarily, not intelligence community, not nothing, to even hold a candle in comparison to how good it's going to be at executing. Presumably, a part of its ability to be so good is going to be understanding the world around it with significantly smaller error bars than we currently have. I don't think they even need it to be completely zero error bars; just that it's wayyyy better than ours. What I think is related is that we don't need to have perfectly zero error bars in order to avert war; we just need small enough error bars to overcome the bargaining frictions. Given the high costs of war, that seems pretty feasible.
I say the god AI will not exist.
This is sort of the crux. I happen to agree with you. The point of my comment was to investigate the tenets of a group of folks and see what the implications are. I think that if one adopts a position like in that Scott quote, then the implication is something like the end of war.
Can you name three people who would agree that they make this "widely" held prediction?
Probably not. I don't keep track of names of people. Obviously, there's Big Yud. I quoted Scott below. I'd have to wade further into those doomerist circles to get a third name, and meh.
The important question is whether they are effectively perfect executors compared to each other.
This is where I'm appealing to things like the >90% draw rate in computer chess (when the starting positions are not specifically biased). We also see something similar in the main anti-inductive system that I'm making comparison to - financial markets. At one point, I had heard that an offhand estimate of how long a good trading idea lasts before it's discovered and proliferated is like 18 months. The models just keep getting better.
If you pit two top engines against each other, you won't have any idea who will win. You know it'll be a coin toss but you won't know who will win.
Emphasis added. I don't need to know in order for the AI to tell me that the best outcome is a negotiated settlement within certain parameters.
the opponent's moves are still unknown.
Agreed, but sort of irrelevant. The chess engine is still executing perfectly, even though it doesn't actually know what moves the opponent will ultimately make.
Playing a game well is one thing, but solving a game (determining if a player can force a win) is entirely harder. Checkers, tic-tac-toe, and connect four are solved, while chess is not.
I think the answer here is again that it is ultimately irrelevant. We didn't need to solve chess or diplomacy to have an engine become a nearly perfect executor or to narrow the range of outcomes significantly (>90% draws unless you extremely bias the starting positions, for example).
War is about using force to achieve a political goal.
That would be the substantive disagreement part. Classical theory says that that's not enough for war. You also need a bargaining friction, otherwise, you'll get a negotiated settlement.
Nice find!
I quoted Scott below, but yes, everyone in the Big Yud singularity doomerist community. My post is taking one of their tenets seriously and seeing the implications. My sense is that they won't be particularly happy with such implications. Of course, part of the bit is exposing that many many people don't believe their tenets, surfacing that disagreement, with a clear application of how it contrasts with their other claims.
I think the response would be that you don't need arbitrary precision. You just need enough to get within a pretty wide range of bargaining solutions. That may be doable at a higher level of abstraction, and a perfect executing AI can find that proper level of abstraction.
Of course, this process might not even look like finding the right level of abstraction to our eyes. In chess, grandmasters sometimes look at computer moves, and they struggle to contextualize it within a level of abstraction that makes sense to them. Sometimes, they're able to, and they have an, "OHHHHHHHH, now I see what it's saying," even though it's not "saying".
If there is value in weeding out the bullshit, the omniscient AI will weed out the bullshit. AI already plays diplomacy, trying to weed out bullshit. Just increase the scale. The best bullshitting Diplomacy players will be mere Magnus Carlsens against it. The Chinese AI and the American AI will both compute all the way out to the draw, just like the TCEC.
I think you're doing the thing where you haven't internalized "the thing". From Scott:
Consider weight-lifting. Your success in weight-lifting seems like a pretty straightforward combination of your biology and your training. Weight-lifting retains its excitement because we don’t fully understand either. There’s still a chance that any random guy could turn out to have a hidden weight-lifting talent. Or that you could discover the perfect regimen that lets you make gains beyond what the rest of the world thinks possible.
Suppose we truly understood both of these factors. You could send your genes to 23andMe and receive a perfectly-accurate estimate of your weightlifting potential. And scientists had long since discovered the perfect training regimen (including the perfect training regimen for people with your exact genes/lifestyle/limitations). Then you could plug your genotype and training regimen into a computer and get the exact amount you’d be able to lift after one year, two years, etc. The computer is never wrong. Would weightlifting really be a sport anymore? A few people whose genes put them in the 99.999th percentile for potential would compete to see who could follow the training regimen most perfectly. One of them would miss a session for their mother’s funeral and drop out of the running; the other guy would win gold at whatever passed for this society’s Olympics. Doesn’t sound too exciting.
A team sport like baseball or soccer would be harder to solve. Maybe you’d have to resort to probabilistic estimates; given these two teams at this stadium, the chance of the Red Sox winning is 78.6%, because the model can’t predict which direction some random air gusts will go. I guess this is no worse than having Nate Silver making a betting model. But on the individual level, it’s still a combination of your (well understood) genes and (well understood) training regimen.
Hedge funds already have some of the best weather models in the world. There's alpha there right now. Or at least there was; I don't know how much has been anti-inducted away. The god AI will certainly be able to do at least as well. It will probably make our current best models look like a mere Magnus Carlsen. And if there's alpha in taking a more minute view, scoping the model in to a particular stadium, why can't it do that? Where there is alpha in the AI getting information, the AI will go there and get the information. It will be able to massively reduce the error bars. And all you need to get rid of war is reduce the error bars enough to get to a negotiated agreement. There's tons of alpha there, so there they will go. Until that alpha has been anti-inducted away, and we're right back in the paradox.
SMBC gets this close.
I've been thinking about the Grossman-Stiglitz Paradox recently. From the Wiki, it
argues perfectly informationally efficient markets are an impossibility since, if prices perfectly reflected available information, there is no profit to gathering information, in which case there would be little reason to trade and markets would eventually collapse.
That is, if everyone is already essentially omniscient, then there's no real payoff to investing in information. I was even already thinking about AI and warfare. The classical theory is that, in order to have war, one must have both a substantive disagreement and a bargaining friction. SMBC invokes two such bargaining frictions, both in terms of limited information - uncertainty involved in a power rising and the intentional concealment of strength.
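As a toy illustration of that logic (stylized numbers of my own, not from the original paper): the payoff to gathering information shrinks as prices come to reflect it, so past some point nobody pays to gather more.

```python
# Stylized Grossman-Stiglitz sketch (illustrative assumptions): the trading
# profit from being informed falls as more traders become informed, because
# prices increasingly reflect the information, while the cost of acquiring
# the information stays fixed.

def net_payoff_to_information(share_informed, mispricing=1.0, info_cost=0.2):
    return mispricing * (1.0 - share_informed) - info_cost

print(net_payoff_to_information(0.0))  # about 0.8: worth paying for the information
print(net_payoff_to_information(0.9))  # about -0.1: once prices mostly reflect it, gathering more is a loss
```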
Of course, SMBC does not seem to properly embrace the widely-held prediction that AI is going to become essentially omniscient. This is somewhat of a side prediction of the main prediction that it will be a nearly perfectly efficient executor. The typical analogy given for how perfectly efficient it will be as an executor, especially in comparison to humans, is to think about chess engines playing against Magnus Carlsen. The former is just so unthinkably better than the latter that it is effectively hopeless; the AI is effectively a perfect executor compared to us.
As such, there can be no such thing as a "rising power" that the AI does not understand. There can be no such thing as a human country concealing its strength from the AI. Even if we tried to implement a system that created fog of war chess, the perfect AI will simply hack the program and steal the information, if it is so valuable. Certainly, there is nothing we can do to prevent it from getting the valuable information it desires.
So maybe, some people might think, it will be omniscient AIs vs omniscient AIs. But, uh, we can just look at the Top Chess Engine Competition. They intentionally choose only starting positions that are biased enough toward one side or the other in order to get some decisive results, rather than having essentially all draws. Humans aren't going to be able to do that. The omniscient AIs will be able to plan everything out so far, so perfectly, that they will simply know what the result will be. Not necessarily all draws, but they'll know the expected outcome of war. And they'll know the costs. And they'll have no bargaining frictions in terms of uncertainties. After watching enough William Spaniel, this implies bargains and settlements everywhere.
Isn't the inevitable conclusion that we've got ourselves a good ol' fashioned paradox? Omniscient AI sure seems like it will, indeed, end war.
They're not backyard-maintainable, but nor are modern ICEs.
My sense is that the Toyota Dynamic Force engines are still mostly backyard-maintainable (there will always be a question of level of effort as well as some specific sub-systems), and they're pretty darn efficient. Seems they got there with just good old fashioned design optimization and only a couple additional computer-controlled subsystems.
It's not clear exactly what is happening with which sources of dollars; there are a bunch of different numbers in the article, and they're mostly unattached to any particular mechanisms. It may be only $400M out of $5B. It's not clear if it's limited to just some funding agencies or based on some other criteria. My guess from the following sentence is that it's currently just some funding agencies:
The Departments of Education and Health and Human Services plan to immediately issue stop-work orders on grants to the school, the task force said.
That would make sense, as DoE/HHS are a very small part of federal research funding.
One thing to note is that a "stop-work order" is a particularly harsh tool. Rather than simply defunding the agencies, so that there simply aren't new grants to go around (and no one knows how they can change behavior to improve the situation), a stop-work order says that the university must completely stop doing anything related to an existing grant. They certainly can't spend any of the money, not even on grad student salaries. It must grind to a halt.
I have heard about this sort of thing happening before. Back when the gov't started getting serious about China's influence in academia, they started requiring a bunch of disclosures about China-related stuff. Apparently, one guy at one university screwed up badly enough that they issued a stop-work order to everything the university did with their federal funding until they could sort everything out. At the same time, they were even prosecuting professors if they weren't disclosing. The message was clear that the gov't took this stuff seriously, and if anyone screwed up, then everyone, at the institutional level, paid the price. As I put it here, that makes the game theory pretty easy. If you're a top tier talent, you can't afford to FAFO with some university that can't get it together at an institutional level, no matter what else they might offer you.
Of course, right now, this seems to be limited just to antisemitism (and so far, just Columbia) rather than extending to further bad behavior in academia. I, of course, proposed doing this type of thing for when a university, at an institutional level, does basically anything that discriminates on the basis of race/gender (and I got a lot of downvotes here for saying that such a plan was way better than indiscriminate "chemo", just shutting stuff down randomly with no incentive for changing behavior). Maybe it'll come, and this is just the trial balloon. It could make sense to start with one that is over-the-top egregious. Even Scott Aaronson, who is famously over-the-top performative anti-Trump, went with this:
For the past year and a half, Columbia University was a pretty scary place to be an Israeli or pro-Israel Jew—at least, according to Columbia’s own antisemitism task force report, the firsthand reports of my Jewish friends and colleagues at Columbia, and everything else I gleaned from sources I trust. The situation seems to have been notably worse there than at most American universities. ... Last year, I decided to stop advising Jewish and Israeli students to go to Columbia, or at any rate, to give them very clear warnings about it. I did this with extreme reluctance, as the Columbia CS department happens to have some of my dearest colleagues in the world, many of whom I know feel just as I do about this.
He also sort of grudgingly accepted some game theory:
Time for some game theory. Consider the following three possible outcomes:
(a) Columbia gets back all its funding by seriously enforcing its rules (e.g., expelling students who threatened violence against Jews), and I can again tell Jewish and Israeli students to attend Columbia with zero hesitation
(b) Everything continues just like before
(c) Columbia loses its federal funding, essentially shuts down its math and science research, and becomes a shadow of what it was
Now let’s say that I assign values of 100 to (a), 50 to (b), and -1000 to (c). This means that, if (say) Columbia’s humanities professors told me that my only options were (b) and (c), I would always flinch and choose (b). And thus, I assume, the professors would tell me my only options were (b) and (c). They’d know I’d never hold a knife to their throat and make them choose between (a) and (c), because I’d fear they’d actually choose (c), an outcome I probably want even less than they do.
Having said that: if, through no fault of my own, some mobster held a knife to their throat and made them choose between (a) and (c)—then I’d certainly advise them to pick (a)! Crucially, this doesn’t mean that I’d endorse the mobster’s tactics, or even that I’d feel confident that the knife won’t be at my own throat tomorrow. It simply means that you should still do the right thing, even if for complicated reasons, you were blackmailed into doing the right thing by a figure of almost cartoonish evil.
This is what I have been saying. Use the tools that you have. Don't use them indiscriminately. Don't imagine that you're doing chemotherapy in just randomly attacking everything. Tailor them specifically to very very clearly change the incentives so that universities need to change at an institutional level and that if they don't, individual talent has a huge incentive to just leave them.
Now, of course, one always has to worry a bit about how when something is done by the stroke of a pen, it can be reversed by the stroke of a pen of the other guy (or an equal and opposite "Dear Colleague," letter). But solutions to that problem are much harder to come by.
I wear headphones at the gym and always have a podcast on. When my wife first started coming with me, I didn't wear them, because we were talking more, I was teaching her, etc. Now, we still talk a bit here and there, but I'm usually listening or doing flash cards or something on my phone between sets. She literally brings a laptop and sends work emails or reads books or whatever between sets. It does help that it's not a super high traffic gym (and we go when it's lighter in the morning), so there's not a lot of pressure to hurry. I've seen people who seem to be on actual voice phone calls, which is mildly bad gym etiquette, but they've all talked at low volumes, so I haven't really minded much.
I would observe that these are pretty mild concerns in my mind. Like I said, we go in the morning, and for me, it's almost as much time for my mind to wake up and just get my body going for the day. Would I instead be sitting around, having a cup of coffee, reading something, and not really doing much while I'm really just sort of slowly getting my mind awake for the day? Why not have a cup of coffee at the gym, listen to something, and also move my body/get some exercise in instead? By the end, I'm alert and ready to go for the day; in fact, I kinda feel less good during the day if I don't go to the gym in the morning. Exercise literally is a helluva drug, just one that is really really good for you.
I really feel like these mild concerns can be pretty easily overcome, even by just finding a training buddy... at least, once you've decided that you are going to incorporate it into your lifestyle. The much bigger barrier is, "Am I going to do this?" not, "How am I going to slightly improve the quality of this?"