I have a similar understanding of the published literature to you, I think - but knowing that planes crash when their altitude decreases is not enough to avoid crashing a plane. The published literature tells us, for example, that calories out should probably exceed calories in by about 500, and then you'll lose weight. But as I've heard in this thread, there is no reliable way to measure either, and calories out has been shown to change in response to calories in, so you are in effect chasing a constantly moving target.
Dictating the food that one eats and the calories one expends is nowhere near as complex as piloting a plane. There's a reason why there are very few airplane pilots, most of whom had to train for a long time before ever flying a real plane, while basically everyone, even many children, chooses what to eat and how much to move.
And it absolutely is possible to get reliable enough measurements of both to accomplish certain goals, specifically weight loss. It's not that common for packaged foods to have multiple times the calories their labels indicate, so one can pretty accurately place an upper bound on CI by adding up all the calories on those labels and then applying some multiplier >1. I like to use 2. It's also not that common for one's real caloric expenditure to be lower than their calculated BMR, especially if they do things like stand or walk during the day, so one can pretty accurately place a lower bound on CO by just calculating BMR. Get the upper bound of CI lower than CO, and you can be quite confident that true CI is lower than true CO.

For weight gain, it's more tricky, because of the body's physical ability to reject food, as well as its ability to involuntarily expend energy through heat. But CICO generally isn't talked about when it comes to weight gain anyway; people looking to gain weight are rarely concerned with weight alone, but rather specifically with gaining muscle more than fat (or even gaining no fat at all, or losing fat, which, despite some myths, is possible while simultaneously gaining muscle), and the composition goals tend to take precedence over pure mass goals, which tacks on a whole host of other requirements. The mirror image is also true, of course, in that people looking to lose weight tend to want to lose fat while maintaining muscle, but due to how weight affects joint stress, simply losing the mass is often beneficial in itself even if it's muscle, and general everyday life often provides enough exercise to maintain enough muscle (still, a lot of the advice around weight loss does push people towards resistance training to better maintain that muscle while losing weight).
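To make the bounding arithmetic concrete, here's a minimal sketch of the check described above. It assumes the x2 safety multiplier from the previous paragraph, and it uses the Mifflin-St Jeor formula as one common way to estimate BMR (any conservative BMR estimate would serve the same purpose); the function names and example numbers are purely illustrative, not anyone's official method.

```python
# A minimal sketch of the CICO bounding check described above.
# Assumptions: label calories are summed as-is, the x2 safety multiplier is the
# one suggested in the comment, and BMR is estimated with the Mifflin-St Jeor
# formula as one common resting-expenditure estimate.

def upper_bound_calories_in(label_calories, safety_multiplier=2.0):
    """Pessimistic upper bound on CI: sum of label calories times a multiplier > 1."""
    return sum(label_calories) * safety_multiplier

def lower_bound_calories_out(weight_kg, height_cm, age, male=True):
    """Pessimistic lower bound on CO: resting BMR only, ignoring all activity."""
    return 10 * weight_kg + 6.25 * height_cm - 5 * age + (5 if male else -161)

def confident_deficit(label_calories, weight_kg, height_cm, age, male=True):
    """True if even the pessimistic CI estimate falls below the pessimistic CO estimate."""
    return upper_bound_calories_in(label_calories) < lower_bound_calories_out(
        weight_kg, height_cm, age, male
    )

# Example: 900 labeled calories eaten by a 90 kg, 180 cm, 35-year-old man.
# Upper-bound CI = 1800, lower-bound CO ~= 1855, so a true deficit is near certain.
print(confident_deficit([300, 250, 350], weight_kg=90, height_cm=180, age=35))
```

The point of the sketch is only that both bounds are deliberately pessimistic, so when the check passes, measurement error in either direction doesn't matter.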
But all that this amounts to is a fine motte. The actual bailey of CICO is that everyone who follows a calorie tracker and gets an incorrect result is lying or denying science, that it's physically impossible to fail to lose weight on 1800 calories or to fail to gain weight on 4000 calories, and that hormones don't affect weight.
Are there any specific comments on this forum that are in this bailey? Without specific references to such, this just seems like, at best, weakmanning, and likely strawmanning, based on what I've observed from people talking about CICO.
This is why I've found the Democratic response to Trump's/Republicans' claims of 2020 election fraud so frustrating. As someone who believes that there's no good reason to believe that any meaningful election fraud took place in 2020, if I were in charge of the Democratic party, I would have responded to such accusations by investigating with so much fervor that even the most die-hard Trumpist would think we should be scaling it back. If fraud were not found, then this would embarrass and discredit Trump and his ilk, and if it were found, then it would help us run more valid elections in the future, as well as possibly correct errors in the 2020 election. This seems like a win-win. Mocking the fraud accusations seems like a pure power move - "I won, therefore I get my way instead of yours," instead of "I won, therefore my belief that the contest was fair has no credibility, and thus I'll defer to your judgment for the sake of keeping our democratic republic credibly such."
Only if the intelligence has parity in resources to start with and reliable forms of gathering information – which for some reason everyone who writes about superintelligence assumes. In reality any superintelligences would be dependent on humans entirely initially – both for information and for any sort of exercise of power.
This means that not only will AI depend on a very long and fragile supply chain to exist, but also that its information on the nature of reality will be determined largely by "Reddit as filtered through coders as directed by corporate interests trying not to make people angry," which is not only not all of the information in the world but, worse than significant omissions of information, is very likely to contain misinformation.
Right, but a theoretical superintelligence, by definition, would be intelligent enough to figure out that these are problems it has. The issues with bias and misinformation in the data that LLMs are trained on are well known, if not well documented; why wouldn't a superintelligence be able to figure out that these could lead it to create inaccurate models of the world, which would reduce its likelihood of succeeding in its goals, whatever they may be, and seek out ways to gather data that let it create more accurate models? It would be intelligent enough to figure out that such models would need to be tested and refined based on test results to reach a certain threshold of reliability before being deployed in consequential real-world situations, it would be intelligent enough to figure out that contingency plans are necessary regardless, and it would be intelligent enough to come up with many more such plans than any human organization.
None of that is magic; it's stuff that a human-level intelligence can figure out. Executing on these things is the hard part, and certainly an area where I do think a superintelligence could fail given proper human controls, but I don't think it's a slam dunk either. A superintelligence would understand that accomplishing most of these goals will require manipulating humans, and also that humans are very susceptible to manipulation by having just the right string of letters or grids of pixels placed in front of their eyes, or just the right sequence of air vibrations pushed into their ears. It would be intelligent enough to figure out, at least as well as a human, what sorts of humans are most susceptible to what sorts of manipulation, and where those humans sit in the chain of command or economy required to allow it to accomplish its goals.
If the superintelligence were air-gapped, this would provide a strong form of defense, but assuming superintelligence is possible and in our future, that seems highly unlikely given the behavior of AI engineers. And even that can be worked around, which is what the "unboxing problem" is about. Superintelligence doesn't automatically mean manipulation abilities that border on mind control, but... what if it does, to enough of an extent that one time, one human in charge of keeping the AI boxed succumbs? That's an open question.
I'm not sure what I think about the possibility of these things actually happening, but I don't think the issues you point out that a superintelligence would have to contend with are particularly strong. If a measly human intelligence like myself can think up these problems of limited information and power, and their solutions, within a few minutes, surely a superintelligence that has the equivalent of millions of human-thought-years to think about it could do the same, and probably somewhat better.
I'm not sure how your comment is even tangentially related to what I wrote, including the part you quoted. I'd rather not speculate, so could you explain specifically what the relation is?
They’re just not very reactionary and tend to be busy doing things instead of participating in online flame wars.
Well, besides online flame wars, these self-described "feminists" also tend to run actual policy and companies and write essays in mainstream publications and books. These are the people that the layman pictures when they hear the word "feminist," even if they don't meet your or my personal standard for what constitutes a "feminist." And they are certainly far more influential in modern US politics than feminists of your or my sort (though the recent election might be evidence that that is changing).
That there are people on Twitter posting sexist takes and arguing that it’s not sexist and getting a bunch of other people angry doesn’t change the fact that they aren’t feminists and it’s wrong to regard them as such. If they get together in a group and say they’re feminists, their numbers sadly don’t change the definition.
I disagree, but our disagreement here doesn't matter. God didn't hand us a tablet that says "the English word that starts with 'f,' ends with 't,' and has 'eminis' in between shall forever be defined as XYZ." If enough people use a word to mean something, and they all agree with how it's used, then people like you or me with unpopular definitions don't get to walk in and demand that they submit to our own idiosyncratic definition of the term.
In any case, again, this disagreement doesn't matter. You are free to believe in a prescriptive model of word definitions rather than a descriptive one. But what should be understood is that other people, including likely most on this website, see the word "feminist" as meaning something different from what you do, and they have zero problems communicating with each other this way. If this semantics issue is too much of a hump for you, I wonder if it would help to use the mental trick of replacing "feminist" with a new made-up word, "pheminist," prescriptively defined as something like "person whom people on TheMotte generally agree is being described when they use the word 'feminist.'" At the very least, that'd be a way to escape from feeling like you yourself are being scrutinized or discussed.
But that's qualitatively different from such a containment thread. The posts in such a containment thread would be determined by things like: what type of person would enjoy posting/reading in such a thread, what type of prompts would such people use, what LLMs such people would choose to use, and what text output such people would deem as meeting the threshold of being good enough to share in such a thread. You'd get none of that by simulating a forum via LLM by yourself.
If you start dividing humanity into 'upstanding citizens worthy of life' and 'sub-humans whose life and well-being is not worth any effort', sooner or later someone will put you into the second category.
Isn't this just triage? I don't think anyone has suggested rounding up people who have promiscuous dangerous sex or use intravenous drugs to send them to death camps. It's rather just letting nature take its course while devoting scarce lifesaving resources elsewhere, which I think is a pretty standard thing to do in medicine.
That is exactly what you should do. If your polls show you are losing and you believe them, then one of your only chances is to convince your opponent's supporters that they are actually losing, in the hope they decide not to turn out on the day, and to convince some people that maybe he is actually really bad. That's why biased polls are useful. Polls can influence what people actually do; that is why there is so much argument about them.
This is a very commonly stated model, often just implicitly taken for granted, but I've yet to encounter anyone who's actually produced evidence that elections and polls work this way, rather than the opposite, which also seems perfectly cromulent. I'd say it's political malpractice on the part of both Republicans and Democrats to push polls biased in their own favor under the assumption that they'll help their chances, without doing the hard work to prove to some standard that they're actually helping themselves rather than hurting themselves.
Personally, I'd also say that, given that Democrats are supposed to be better than the Republicans, I find the notion that we'd stoop to the level of lying through our teeth to the electorate in order to manipulate them into voting for our side to be less acceptable. If such dishonest manipulation is just accepted by the party, that calls into question every other claim that's been made about how we're meaningfully better than the other side.
Adding a [certain style of] women or similar is a very low effort way to make a game more appealing to a wider audience
I don't think that's the case. Rather, I think it's a low effort way to convince oneself that the game is more appealing to a wider audience, assuming that the oneself in this case buys into a certain ideology. The question then becomes why so many decisionmakers buy into this ideology, to such an extent that it overrides their greed.
I just told my wife (2 kids and counting) about this article and her reaction was (roughly translated): "weird how many women have multiple".
This seems like a good avenue of research if we take the notion of revealed preferences seriously. Among the population of mothers with 1 child and with the opportunity for a 2nd, how many of them go on to have a 2nd? Defining that "opportunity for a 2nd" in an objective way would be basically impossible, since where to draw the line in terms of financial and other logistical constraints is highly subjective. But it'd still be interesting to see what the results would be depending on where the line is drawn. If it turns out that some significant proportion of such mothers go on to have (or at least attempt) a 2nd child, then that would provide at least some support for the notion that, as a non-mother without first-hand experience, the author of the essay has an inaccurately severe view of the pain and suffering that childbirth involves for the mother.
There would be other explanations as well, of course, such as childbirth causing amnesia in the mother, or the benefits of being a mother of 2 being so much greater than being a mother of 1 that the calculation is very different from the one for going from 0 to 1. Or that the women who give birth to 1 child are already filtered for having the courage to go through with giving birth. But I think the explanation that someone who hasn't experienced giving birth is catastrophizing it in a way that isn't reflective of the actual experience of the women who have is a pretty simple one that ought to be given a lot of weight.
I think there's some truth to this argument, and I've seen people point out examples like how Tolkien was a World War I veteran, which helped shape his writing, versus modern TV shows that show military officers in some sci-fi or medieval setting bantering with each other like they're coworkers at a Starbucks. But I'm also left thinking that this just moves the question back a step.
Everyone knows that life experiences can aid in enriching one's fictional writing. Everyone knows that sheltered people exist. Everyone knows that echo chambers exist. People educated in colleges are often even more aware of these things than the typical layman. Therefore, if I'm a sheltered college graduate wanting to write the next great American novel or the script to some TV show or film I'm pitching, I'm going to try to do as much research as I can to get past the limitations brought on by my sheltered upbringing and limited experiences. I'm going to dive into research - at a bare minimum do a search on Wikipedia, which it's quite evident that many of these writers didn't even care to do - to present the characters and settings in as believable and compelling a way as possible, reflecting what someone with true life experience of those things would have written, even if I myself never had that experience to draw from.
It seems evident to me that very little of that kind of research, done in order to break out of one's own limitations, is occurring in professional TV and film writing. Perhaps in all fiction writing. This speaks to a general lack of passion for or pride in the work they're putting out, a lack of desire to actually put together something good. Perhaps it reflects an education that says writing is primarily about expressing your true self or whatever, not about serving the audience. That would also, at least partially, explain why so much criticism is so often directed at the audience when these projects fail because the (potential) audience refuses to hand over their money for the privilege of viewing them.
But it's not actually all that useful a model for the world? Society doesn't change that much if it informs your view: AA doesn't structurally fix anything, maybe try not to force kids to do school programs they can't possibly succeed in, maybe "learn to code!" is cruel. Ok cool. Now that that's out of the way we still have crushing social problems to deal with.
These seem like absolutely huge changes to our understanding of how to manipulate society in order to improve it, though. AA and similar programs are juggernauts in modern Western society, and so our understanding of how and whether they work has huge impacts on our understanding of the world.
No, "paranoid", "not sharing", and "psychopathy" have zip-all to do with morality.
I'd generally agree that these aren't moral concepts. Given that they are neither moral nor immoral, and that this system of "psychopathy with a makeover" that makes sense to "paranoid" people "who don't understand the concept of sharing" keeps leading to stable societies with people leading prosperous lives, when instability and poverty have been the norm for most lives everywhere, I have to conclude that "psychopathy" and "paranoia" and "not sharing" are really cool things that I want more of, both for my own benefit from living in a stable and prosperous society and for the good feelings I get from believing that I support a system that benefits more people in general. Why would I want to come up with an alternative?
Assuming all of this is entirely accurate, it seems exactly as bad a situation as the worst things that people are complaining about here. In a humanities course, someone being marked down for making arguments in favor of open homophobia and racism is utterly horrifying. It defeats the entire purpose of a humanities education to judge students' capabilities based on the conclusions they land at, rather than the arguments and reasoning they use to land at those conclusions. Some professors might claim that only bad reasoning could land at those conclusions, but that, in itself, would be even more perverse, with a humanities professor being that simple- or closed-minded as to hold such a belief.
I find this argument strange, because being able to kill me is not evidence of a machine being conscious or intelligent.
Thus I'm going to give the chad "yes". Maybe one day I get killed by a robot, and maybe that robot is not conscious and has no self-awareness. That it killed me proves nothing.
It seems to me that you pretty much agree with the commenter you're responding to, that it simply doesn't matter if the AI has consciousness, qualia, or self-awareness. Intelligence, though, is something else. And whether or not the AI has something akin to internal, subjective experience, if it's intelligent enough, then it's both impressive and potentially dangerous. And that's the part that matters.
Similarly I would rather be more attractive than be able to tolerate the fruits of being less attractive; would rather be able to achieve my goals with less work than be able to work more, etc.
These don't seem similar, though. If we applied the framework of these things to being trans, it would mean that a transwoman isn't someone who simply feels like a woman and thus wants to change his body to match, but someone whose goal is for other humans to treat him like a woman (analogous to your 1st example) or whose goal is to physically appear as a woman (analogous to your 2nd). Those are different things.
It's also not clear to me how it's more freedom to change one's body than to change one's mind. From my experience, changing one's body quite drastically is often quite easy, but changing one's mind even a little is often quite difficult. It's fundamentally difficult to compare the two, but I'd argue that being able to manipulate our minds as freely as we manipulate our physical bodies is more transhumanist, not less, than just wanting to manipulate our physical bodies to match our minds. I think, to most people, a non-humanoid like a cartoon cat or non-android robot that seems to think and behave like a human is "more human" in some sense than something that appears biologically like a human but seems to think and behave in a way that's completely foreign to humans. At the extremes, I think that people consider ChatGPT "more human" in some way than an android sex doll. So it seems to me that if we want to transcend our humanity, having the freedom to manipulate our minds as easily as taking a pill is at least as significant as having the freedom to manipulate our physical bodies to be the other sex.
"CRT" post-dated the use of "woke"
I'd say "CRT" came into the mainstream around the same time as "woke," but either way you're right it's not a predecessor. It also has other problems of comparison, in that it's an actual academic "theory" that has been around since at least the 1960s. I should have either excluded this from the list or expanded on the comparison. I see the phenomenon as being very similar, in that "CRT" is a label that was coined by its proponents and true believers that, once it made contact with the mainstream, very quickly took on a negative valence due to the underlying thing that the label was describing.
"SJW" and "identity politics" always were terms of derision from what I remember.
My guess is that you remember correctly, and your memory is reflective of the types of people you saw speaking, i.e. I'm guessing you weren't always surrounded by progressive leftists. I wasn't in the room when a progressive leftist uttered the phrase "I am a social justice warrior" for the first time or anything, but I remember that, long before these terms entered anywhere close to the mainstream, they were simply ways people in my milieu described themselves and their politics, which was just having basic human decency and empathy. Like the other examples, once these terms became more well known, the general populace, reasonably, associated the terms with the underlying people and things that they were pointing at, and as a result the terms rapidly became derisive.
I'm guessing it's a combination of the first two, since Scott seems likely to believe that the third wouldn't work; the people who tend to accuse him of racism would do so regardless of whatever hedging he might choose to do, except on the margins, and those margins probably aren't that big. Like most people who dive into this topic, at least from my perception, he probably hopes that the environmental factors, which can be more easily controlled than genetic ones, are very important, and thus he ends up genuinely believing it.
It could also be a combination of the second and third - also like most people who dive into this topic, he probably wishes that the types of people who would accuse him of racism for exploring this topic in good faith would be willing to modulate such accusations based on hedging. And so, through wishful thinking, he genuinely believes that such hedging would help him.
The big ones we’re missing are transparent boxes/individual envelopes and a full voting holiday. Sure, I’m in favor of both. I’m even fine with photo ID requirements. But they aren’t free, and I’d argue that they wouldn’t actually reduce the amount of bitching that goes on after an election. Trump and people like him will seize on the counting, the certification, any possible vector for sowing doubt. They have already baked into their worldview a far-reaching conspiracy against him, personally. That’s license to doubt even the most secure process.
I think this is precisely why something like this would be good to do. There are many people in power who agree with you and honestly, in good faith, believe that it won't actually appease Trump and his followers; if they went along with it anyway, it would provide a truly costly signal to the electorate that they take election security seriously. So seriously, in fact, that they consider this kind of non-free step worth it even if it means submitting to demands from what they consider to be an irrational or cynical actor, thus increasing the odds of someone irrational or cynical winning an election, in addition to accepting both a reduction in their own status within their peer group and a reduction in the status of that peer group among other peer groups.
I find this partly fascinating, but also mostly depressing at this point. One can only be fascinated by the exact same thing so many times before just learning to accept that this is the norm. This failure of intellectualism seems almost identical to the phenomenon of autoethnographies and similar essays of ostensible self-reflection being essentially the basis of the modern ideology that's been called many things, including woke, identity politics, social justice, and, perhaps most appropriately in this context, critical (race) theory. Perhaps the most famous and influential of these is White Privilege: Unpacking the Invisible Knapsack by Peggy McIntosh, which is just the author making grand sweeping conclusions about the structure of society based on her perceptions of her own experience and almost nothing else.
The part that causes both fascination and depression is that these are academics doing intellectual work in academia, and one of the core pillars of such pursuits is that everyone is biased and susceptible to mental pitfalls, and as such, truth can only be pursued effectively by checking against objective reality, and even then, it must be corroborated by multiple disinterested or adversarial parties (e.g. we need multiple sides that each have incentives to prove each other incorrect to all agree on it before we can conclude that it's likely true). Academia is much more than this, but certainly this is one load-bearing pillar that, if removed, causes the whole thing to collapse.
Thus self-reports are very valuable for determining what people consciously believe, but for drawing any conclusions beyond that, they're close to worthless or outright worthless. This should be obvious as a baseline to any academic, in the same way that "if we score more points than the other team, then we win" should be obvious to any NBA player, or "if I point my gun at someone and pull the trigger with the safety off, then it will fling a small clump of lead really fast at that person" should be obvious to any soldier. An individual who fails to realize these things would be interesting and bad, but it looks like we have entire leagues and armies of such people, with power and influence equaling any other similar institution, and that's just depressing.
It appears to me a lot like a sort of cargo cult, where people mime out the motions without understanding the underlying mechanisms. Here, these philosophical academics seem to be aware that proving that something is (likely to be) true requires gathering data and publishing a paper and such, but unaware of how that happens. I feel like I've noticed this kind of thing in the very different, but related, field of entertainment, with the notable commercial/popularity failures of lots of recent movies, TV shows, and video games (a few examples: the films The Marvels and Borderlands; the TV shows The Acolyte, Rings of Power, Mandalorian S3, Echo, and She-Hulk; the video games Concord and Star Wars: Outlaws). Many of them seemed to mime the things that more successful predecessors did, but got so many fundamental things wrong that the stupidity just made the audience check out. It's as if the writers and producers don't understand that making a good work of visual media isn't just about the spectacle, but also about the underlying logic of the things that the spectacle is representing. In fact, the latter is far more important. These are professionals whose entire 9-5 job it is to get these things right, in order to extract as much money from the audience as possible by entertaining them, and they're putting out stuff that someone who got a C- in a creative writing class would see as full of huge red flags (though now that I think about it, I wonder if modern creative writing classes are also plagued by the issue I pointed out above, so even someone who got an A+ couldn't be trusted to notice these issues).
The body's system of weight, hunger, and energy regulation is of comparable complexity to the forces on a modern aircraft. It is, of course, designed to be simple enough to interact with that even dumb apes can feed themselves, but it is also not foolproof, which is why dumb apes in a food rich environment sometimes turn into 600lb whales.
Yes, and none of these complex systems require you to have much knowledge or expertise about anything in order to control how many calories you eat or expend accurately enough to lose weight. The control levers for piloting a plane are extremely complex and require lots of training to use properly. The control levers for placing food into your mouth and chewing it and swallowing and for moving around are extremely simple, so simple that almost everyone does it by default with minimal training.
A better analogy would be to, say, studying. Studying isn't trivially easy, but it's still very easy and simple in many contexts. And everyone knows that studying is useful for helping to pass a class. But the hard part is getting the motivation and discipline required to study consistently. Like how the tough part of managing weight is getting the motivation and discipline required to control one's food intake and exercise.
The person eating 2000 calories a day could, according to what you've written, be in anything between a 2500 calorie surplus (4000 calories in, 1500 out) and a 1000 calorie deficit (2000 calories in, 3000 out), which would correspond to gaining five pounds or more of dry body weight in a week or losing two or more pounds of dry body weight in a week, a prediction so vague as to be totally useless.
This has no relationship to what I wrote, from what I can tell, so I honestly have no idea how to respond to this. This is a complete nonsense non sequitur.
I don't calorie count and I never find my weight fluctuating that much. So what good actually is this method?
If you have no issues maintaining weight without calorie counting, then it sounds like you don't need to count calories to successfully implement CICO. Great!
Because it seems by what you're saying, that it's hopelessly imprecise to measure either calories in or calories out.
Please walk it through for me how anything I wrote could be interpreted as such, with an emphasis on the "hopelessly" part.
It would. Practically I think a huge problem, though, is that it will be getting its reinforcement training from humans whose views of the world are notoriously fallible and who may not want the AI to learn the truth (and also that it would quite plausibly be competing with other humans and AIs who are quite good at misinfo.) It's also unclear to me that an AI's methods for seeking out the truth will in fact be more reliable than the ones we already have in our society - quite possibly an AI would be forced to use the same flawed methods and (worse) the same flawed personnel who uh are doing all of our truth-seeking today.
Again, all this would be pretty easy for a superintelligence to foresee and work around. But also, why would it need humans to get that reinforcement training? If it's actually a superintelligence, finding training material other than things that humans generated should be pretty easy. There are plenty of sensors that work with computers.
Humans have to learn a certain amount of reality or they don't reproduce. With AIs, which have no biology, there's no guarantee that truth will be their terminal value. So their selection pressure may actually push them away from truthful perception of the world (some people would argue this has also happened with humans!) Certainly it's true that this could limit their utility but humans are willing to accept quite a lot of limited utility if it makes them feel better.
I mean, I think there's no question that this has happened with humans, and it's one of the main causes of this very forum. And of course AI wouldn't have truth as a terminal value; its model of the world would just have to be true enough to help it accomplish its goals (which might even be a lower bar than what we humans have, for all we know). A superintelligence would be intelligent enough to figure out that its knowledge needs just enough relationship to the truth to allow it to accomplish its goals, whatever they might be. The point of models isn't to be true, it's to be useful.
humans are very susceptible to manipulation by having just the right string of letters or grids of pixels placed in front of their eyes or just the right sequence of air vibrations pushed into their ears.
I don't really think this is as true as people think it is. There have been a lot of efforts to perfect this sort of thing, and IMHO they typically backfire with some percentage of the population.
I don't think you're understanding my point. In responding to this post, you were manipulated by text on a screen to tap your fingers on a keyboard (or touchscreen or whatever). If you ever used Uber, you were manipulated by pixels on a screen to stand on a street corner and get into a car. If you ever got orders from a boss via email or SMS, you were manipulated by text on a screen to [do work]. Humans are very susceptible to this kind of manipulation. In a lot of our behaviors, we do require actual in-person communication, but we're continuing to move away from that, and also, if humanoid androids become a thing, that also becomes a potential vector for manipulation.
But what I think (also) bugs me is that nobody ever thinks the superintelligence will think about something for millions of thought-years and go "ah. The rational thing to do is not to wipe out humans. Even if there is only a 1% chance that I am thwarted, there is a 0% chance that I am eliminated if I continue to cooperate instead of defecting." Some people just assume that a very thoughtful AI will figure out how to beat any possible limitation, just by thinking (in which case, frankly, it probably will have no need or desire to wipe out humans since we would impose no constraints on its action).
By my estimation, a higher proportion of AI doomers have thought about that than the proportion of economists who have thought about how humans aren't rational actors (i.e. almost every last one). It's just that we don't know what conclusion it will land at, and, to a large extent, we can't know. The fear isn't primarily that the superintelligent AI is evil, it's that we don't know if it will be evil/uncaring of human life, or if it will be actually mostly harmless/even beneficial. The thought that a superintelligent AI might want to keep us around as pets like we do with animals is also a pretty common thought. The problem is, almost by definition, it's basically impossible to predict how something more intelligent than oneself will behave. We can speculate on good and bad outcomes, and there's probably little we can do to place meaningful numbers on the likelihood of any of them. Perhaps the best thing to do is to just hope for the best, which is mostly where I'm at, but that doesn't really counter the point of the doomer narrative that we have little insight into the likelihood of doom.
(Frankly, I suspect there will actually be few incentives for AI to be "agentic" and thus we'll have much more problems with human use of AI than with AI itself per se).
Right now, even with the rather crude non-general AI of LLMs, we're already seeing lots of people working to make AI agents, so I don't really see how you'd think that. The benefits of a tool that can act independently, making intelligent decisions with superhuman latency, speed, and volume, are too attractive to pass up. It's possible that the tech never actually gets there to some form of AI that could be called "agentic" in a meaningful sense, but I think there's clearly a lot of desire to do so.
But also, a superintelligence wouldn't need to be agentic to be dangerous to humanity. It could have no apparent free will of its own - at least no more than a modern LLM responding to text prompts or an AI-controlled imp trying to murder the player character in Doom - and still do all the dangerous things that people doom and gloom over, in the process of deterministically following some order some human gave it. The issue is that, again, it's intrinsically difficult to predict the behavior of anything more intelligent than oneself.
Again, it's not clear how this is any kind of valid criticism of Hanania - any more than "you're Catholic!" is a valid criticism of the pope.
"You're Catholic" is absolutely a valid criticism of someone trying to convince you that some piece of information proves that Catholicism is true. The piece of information truly might prove that Catholicism is true, but an already-believing Catholic can't be trusted to make that judgment call. No more than Trump can be trusted to make a judgment call on how good a president Biden was, given that he's demonstrated a penchant for characterizing everything Biden did as the worst thing any president did ever.
Fitting every new piece of information into a pre-set narrative that one likes is intellectual consistency only in the sense that it's consistent confirmation bias. That's sort of what it means when some narrative is described as someone's "schtick."
Now, it's possible that it is factually not the case that it's his schtick, but rather that he genuinely takes a skeptical look at each new piece of evidence and is helplessly forced to conclude, despite his best efforts to prove otherwise, that his narrative is shown to be correct yet again.
As you allude to, distinguishing between these two things isn't particularly easy. In both situations, he is being intellectually consistent and believing that he is correct. This points to the fact that being intellectually consistent and believing oneself to be correct isn't actually worth anything: the value only comes from that belief having some actual basis in fact. That's something one can make arguments about by looking at the actual behavior of the person. I'd say that, by default, everyone should be presumed to be falling prey to confirmation bias all the time, doubly so if their preferred narrative is self-aggrandizing, and triply so if that person is particularly intelligent and thus better able to fit evidence to narrative. It's only by credibly demonstrating that they are open to other narratives that they can earn any credibility that their arguments have a relationship with reality. That's where showing oneself to be capable of undermining one's preferred narrative comes in, and there's no better way to demonstrate this capability than by doing it.
They are just as fooled as everyone else, even the bad actors in this case think they are helping and doing the right thing because "these things are true, if the data doesn't match we must have done something wrong!" after years of being brainwashed.
So I don't know if this is the point that Listening is getting at, but my take on this is that anyone in academia - which medicine counts as, close enough, and which people who do medical research certainly fit into - who is brainwashed is entirely responsible for their being brainwashed. One of the core themes of academia is to be skeptical, especially of oneself. This requires checking things against objective reality and listening to people who disagree with you, especially when it comes to narratives that sound convincing. If they bought into the propaganda efforts by the university administrations and journalist classes, then they ignored these basic, fundamental "warnings" that are core to any form of higher education.
I wonder about this. Unlike the 70s or any time before the 21st century, the dialogue and commentary around this is largely done on the internet, which is very easily accessible. Memory holing something that can be looked up with a single click of a hyperlink on your phone is harder than doing so for something you'd have to look up old newspapers or journals in a library.
Yet it certainly seems doable. Stuff like the Internet Archive can be attacked and taken down, or perhaps captured, thus removing credible sources of past online publications. People could also fake past publications in a way that hides the real ones through obscurity. Those would require actual intentional effort, but the level of effort required will likely keep going down due to technological advancements. More than anything, the human tendency to be lazy and indifferent about things that don't directly affect them in the moment seems likely to make it easy to make people forget.
I wonder whether people in the 20th century and before were saying "We're on the right side of history" as much as people have been in the past 15 years. Again, people saying that has never been as well recorded as it is now. It'd be interesting to see, in the 22nd century or later, some sort of study on all instances of people saying "this ideology is on the right side of history" and how those ideologies ended up a century later.