MathWizard
Good things are good
User ID: 164
That's a good point, and it's what he usually does for work (his workplace has free meals for employees, but they're usually not gluten free). Though for a dinner it does decrease the utility of going in the first place. That is, some of the value of going is socializing, and some of the value is eating the food, and so it's just more inconvenient for him.
From what I've witnessed, the issue has never come up on his side. That is, he mostly only attends dinners with relatives, all of whom know about his condition and automatically volunteer to accommodate it without him having ever explicitly asked. Usually at Thanksgiving or something, one of our aunts will make an extra dessert for him that's gluten free, and then he can eat like half the food anyway, because lots of foods don't have gluten and there's enough variety that he can pick and choose what he knows he can eat. So I don't actually know how it would play out if they were less enthusiastic about being kind and accommodating and/or had smaller meals with fewer people and less variety (it helps that pretty much no one else in the family has food issues, except one cousin who also has a gluten issue). But my suspicion is that his default would be to just not attend. If he's making his own food to eat all by himself anyway and not sharing with anyone, it'd be more convenient to just cook and eat it at home. But it also depends on how social you are.
My brother has a gluten issue (I don't know if it counts as a "sensitivity"; the doctors aren't 100% sure it's celiac, but it's extremely similar, so the distinction is moot), which causes him to be lethargic, messes with his digestive system, and causes him to break out in red sores all over his body. The sores can last for weeks after he's eaten a nontrivial amount of gluten in a single meal, and are the most obvious and provable evidence that the condition is real.
It's not like it's going to kill him (though the doctors say that it may increase the risk of colon cancer), but neither will drinking a small amount of bleach. It's basically literal poison for his body.
I understand that it's inconvenient to plan around. And it's especially annoying given that hypochondriacs and trenders and exaggerators exist and are difficult to distinguish from people with real biological conditions. But from the perspective of someone who actually has one of those conditions, if the choice is between being inconvenient to you or eating literal poison, they'll be inconvenient. And if you don't like it, you don't have to invite them to dinner.
Although I agree that there is some ideological corruption in the courts, especially where the culture war is involved, they are less than 100% corrupt. Certainly less than the legislators. Which means that the actual intent of the law sends a nontrivial signal, and if there is money and advocates on both sides of the issue the actual intent can serve as a tiebreaker even if the other side has slightly more money.
At least in America, copyright laws are (ostensibly) primarily for the purpose of promoting and incentivizing the creation of original works. Copying someone else's work is bad not because it increases the availability of that work (which is a good thing) but because it decreases the rightful monetary gain of the original creator since people buying or pirating the copied work are not paying them when they ought to. As a result, fewer artists are financially incentivized to make things, and thus fewer original works are created (which is bad).
As such, there is no legitimate interest in banning AI art or its access to training data, provided the AI is creating new original works and not blatant ripoffs. Yes, the AI art may compete with human artists and thus indirectly reduce financial incentives for them to make art, but it does this indirectly by creating new original art that competes with theirs in the marketplace, same as any human artist who competes with them. As a result, more original art is created, and thus the copyright laws as intended should allow it to exist.
If I had some god-given certainty that any population with legal access to birth control would, independently of any soft pressure or incentives other than the force of the law, end up with fertility below replacement, then I would begrudgingly accept legal controls on it to prevent the extinction of the human race.
With anything less than said absolute certainty, I would attempt to explore a number of softer options. You could provide tax incentives and/or literally pay people to have children. You could attempt to increase the social status of good parents and shame childless people. You could attempt to advance technology to create artificial wombs and have the state make and raise babies (not at all an ideal outcome, but better than extinction or forcing people to breed against their will). You could explore the replacement rates of different subpopulations and attempt to preserve and promote cultures with higher fecundity. Maybe all the liberal white atheists voluntarily go extinct as their population exponentially declines, and they get replaced by immigrants and Amish people who keep having babies. I suppose a religion which forces people to avoid birth control taking over the population is comparable to just directly outlawing birth control, but not the same because people can leave. Maybe we end up in a long term equilibrium where 1/5 of the population are strongly religious with a reproductive rate of 3, and 4/5 of the population are atheists with a reproductive rate of 1/2, so the total population remains constant (1 religious person and 4 atheists have 3 and 2 kids in each group respectively), and some fraction of the religious children leave the faith every generation such that the sizes of each group remain constant.
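The equilibrium arithmetic in that last scenario can be checked with a quick simulation. A minimal sketch, using the hypothetical fertility and defection numbers from the paragraph (not real demographic data):

```python
# Hypothetical stable equilibrium: 1/5 religious (fertility 3), 4/5 atheist
# (fertility 1/2), with 2/3 of religious children leaving the faith each
# generation. All numbers are illustrative, not real demographic data.

def next_generation(religious, atheist, defection_rate=2/3):
    religious_born = religious * 3    # 3 kids per religious person
    atheist_born = atheist * 0.5      # 1/2 kid per atheist
    defectors = religious_born * defection_rate
    return religious_born - defectors, atheist_born + defectors

r, a = 1.0, 4.0  # 1 religious person per 4 atheists
for _ in range(10):
    r, a = next_generation(r, a)
# Both group sizes and the total population stay constant at 1, 4, and 5.
```

Each generation, the 1 religious person has 3 kids and the 4 atheists have 2 between them; 2 of the 3 religious children defect, restoring the 1:4 split at a constant total of 5.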
There are a lot of possibilities that would mitigate the effects. Extinction of specific subgroups and cultures via demographic replacement is a valid and realistic concern for people who care about those subgroups and cultures. But I don't think extinction of the entire human species by perpetually lowered birthrates is a realistic threat unless some sort of chemical pollution actually destroys biological fecundity such that even people who want kids can't have them.
I think so. It's been a while since I learned about this, so I don't remember all the details or studies off the top of my head. But I'm pretty sure there were many such studies, and probably at least some were controlled correctly. I'm not completely certain, though.
However, I don't think it would even be appropriate to control for money/wealth/family income directly, because part of the value of a two-parent household is the increased income. And even if you look at income per parent, that's not necessarily appropriate, because being a single parent forces them to juggle career and child rearing, which leads to fewer opportunities to take on high-paying but demanding jobs. You'd have to control for the socio-economic status of the families the parents came from (i.e., the grandparents of the kids), or something complicated like that which controls for potential earning power rather than actual earnings.
People have had sex for fun throughout all of human history. Even in times with serious social stigma for it, people did it in secret anyway. Even the Bible is absolutely riddled with people having sex they're not supposed to. The cat was never in the bag: people have always and will always want to have lots of sex. It has gotten worse in recent years, but it has always been there.
The most realistic path forward that I see is advances in technology making better, easier, safer forms of birth control that don't have the flaws of current ones. Something like an IUD, but less invasive and easier to just give to everyone and then not remove until they get married. Or some fancy injection you can regularly give people, like a flu shot, that sterilizes them for a year before it wears off (with reliable, predictable timing so nobody ends up permanently sterilized or having kids because it wore off too soon). At the very least, some sort of effective birth control pill or IUD-like thing for men, so that both people can independently control their reproductive status and not be vulnerable to the other one lying.
But in the meantime, we have to work with the technology that exists. And while I do agree that it does contribute to promiscuity, I think that the effect there is secondary and minor while the effect on reducing pregnancies is direct and significant such that the net effect at saving unborn lives is definitely positive.
I personally have no interest in banning contraceptives because, again, who cares.
I want literally the opposite, largely because I am pro-life. I am tentatively in favor of forcing unmarried people to use contraceptives, except that there's no reasonable way to enforce it without authoritarian government control that I'm not in favor of. At the very least, we should bring back all of the shame and stigma that used to be attached to unmarried sex a couple of centuries ago, but only apply it to people who don't use birth control. Also, make birth control free to incentivize people to use it.
First and foremost, this will reduce abortions. The argument against outlawing it is that people will just do it anyway but in unsafe ways. If so, the only way to truly prevent abortions is to prevent unwanted pregnancies, so we should be pushing legal and social pressures towards doing so.
Second, I believe it is immoral to bring an unwanted child into existence. They will not have the love and support from their parents that a child deserves. Again, pro-choice people use this as an argument in favor of abortions, but I think having an unwanted child is less evil than killing them (otherwise we could replace orphanages with euthanasia clinics). But it's still evil, and more birth control would also reduce this.
Thirdly, I believe it is immoral to deliberately have a child as a single parent, even if you want one. I feel less strongly about this, and I'm not sure I would go so far as to call it "evil", just misguided and irresponsible. All of the science shows that children with two parents have significantly better life outcomes. I don't think one parent alone can fulfill all of the responsibilities of both paying for and actually educating and caring for a child, and a single parent doesn't have the full breadth of wisdom and life experiences to impart, since they only have their own perspective.
Unmarried people should not be conceiving children, because it inevitably leads to one of these scenarios (unless you have a shotgun wedding, which is still likely to lead to suboptimal results if your partner wasn't someone you were previously planning to marry). Therefore, unmarried heterosexual people should not engage in unprotected sex, at least in any form with a nonnegligible chance of conception. I'm not convinced it is the responsibility of the government to prevent this; I don't think it's within the range of powers they ought to have. But at the very least, anyone who does this is a bad person, and we need social pressure that disincentivizes people from doing it. Slut shaming is a lost cause, but I hope that unprotected-slut-shaming (of both sexes; men are equally culpable for their actions) can make a comeback.
The first issue is the principal-agent problem. "The rest of us" aren't in charge of hiring and firing them; their boss is. And for the most part their boss doesn't suffer the negative consequences of having them around, because they do mildly useful administrative work which makes the boss's life slightly easier. Nowhere near enough to justify their full salary, but enough for the boss to justify a numerical increase on their budget spreadsheet.
In most businesses, this is partially, but not fully, countered by economic incentives. Shareholders in a company demand profit, and customers demand lower prices, and these can't be fulfilled simultaneously except by cutting costs. Companies with more slack (ones with natural monopolies, regulatory advantages, or just temporarily on an upswing) are much more vulnerable to administrative bloat, because there is less of this pressure. Look at all the cushy but low-productivity positions in the tech industry in the past few years: they have so much slack that people can afford to waste someone else's money. The investors are unlikely to complain or even notice that they only earned $1 billion this year when by all rights they should have earned $1.2 billion, while the difference between +$100 million and -$100 million is going to cause heads to roll.
The situation in healthcare, at least in America, is broken in a slightly different way than just having lots of money though. It's this weird trifecta between the healthcare provider, the insurance companies, and the customers. Rather than customers shopping around for products they like, and declining or substituting a different good if costs are too high, demand is unpredictable and drastic. Bam, health problem happens, person goes to the nearest hospital, gets treated, and then gets a bill afterwards. Further, they don't even pay the bill themselves, they forward it to an insurance company who pays most of it and makes the customer pay some "deductible" which is usually not tied to how much the treatment actually costs. So the demand is highly inelastic. If healthcare prices rise the same number of people are going to have healthcare problems, and although some of them might choose not to get treated, that's a really bad outcome. And even if a customer manages to get to a cheaper more efficient healthcare provider, they're unlikely to see the benefits because of how insurance works.
All of this means that everything is much less tied to the normal economic incentives that keep prices low. Each hospital has something like a small local monopoly over their area and can raise prices and afford bloat with little consequence.
The second issue is that not all of the administrative work is bloat, from a local perspective. The weird adversarial relationship between insurance companies and healthcare providers necessitates a lot of administrative work that isn't productive on a global economic scale but is locally useful to the employer. If the provider hires someone who slightly increases the success rate of convincing the insurance company to pay for treatments that already happen, they might bring in an extra $200k a year, which justifies a salary of $100k a year and earns $100k profit. But if the insurance company hires a similar person who lowers that probability by the same amount, that justifies a salary of $100k for that person and, considered alone, is a $100k profit for the insurance company. These two people cancel each other out, and the net effect on the economy is that an extra $200k is being used up on administrative salaries. Rinse and repeat until the marginal effect of such people decreases enough that the companies are no longer incentivized to hire more of them, and now there are millions of dollars going down the drain in a way that is locally rational for each company but globally wasteful. It's a classic public goods dilemma.
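That arms-race logic can be made concrete with a toy payoff model. The $100k/$200k figures are the hypothetical ones from the paragraph, and the perfectly symmetric cancellation is a simplifying assumption:

```python
# Toy model of the billing arms race: each administrator moves $200k of
# expected payments toward their employer but costs $100k in salary.
# All figures are hypothetical.

SALARY = 100_000
SHIFT = 200_000  # expected payments moved per administrator

def provider_gain(provider_hires, insurer_hires):
    # Opposing administrators cancel each other's effect on payments, so only
    # the *difference* in staffing shifts money; salaries are a pure cost.
    return (provider_hires - insurer_hires) * SHIFT - provider_hires * SALARY

def social_cost(provider_hires, insurer_hires):
    # Payments are a transfer between firms; only salaries are a real loss.
    return (provider_hires + insurer_hires) * SALARY

# Hiring is locally rational whatever the other side does...
assert provider_gain(1, 0) > provider_gain(0, 0)
assert provider_gain(1, 1) > provider_gain(0, 1)
# ...but when both hire, $200k of salaries is burned with nothing to show.
assert social_cost(1, 1) == 200_000
```

The same dominant-strategy structure repeats at each marginal hire until diminishing returns set in, which is the "rinse and repeat" above.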
Add in a bunch of nonsense legal regulations that exist for ostensibly good purposes but probably don't actually justify their costs, and you have even more demand for locally rational but globally wasteful administration.
The entire system is a mess and needs to be destroyed and replaced with... something. National healthcare fixes the second problem but not the first. Maybe that's good enough? The primary complaint about nationalizing anything is that it causes the first problem: prices are decoupled from economic incentives, so nobody is incentivized to reduce bloat. We already have that problem in healthcare, and I don't think nationalizing it would make it much worse, so I'm tentatively in favor, but if possible I would prefer a privatized system that somehow fixed both problems (I have no idea how, though).
Smart people can be cracked out murderers. It's less likely, but entirely possible. More realistically, smart people can have high paying jobs and obey the law and still do unethical things that cause harm to others in a less traceable way, but often with much greater effects due to their increased influence. Just because it's hard to create and enforce laws that measure their misdeeds and punish them appropriately (which is especially hard because most politicians are this kind of person) does not make them good or valuable people, even if some people treat them as if they are.
Of course if you choose the nicest most benevolent person in the smart people group and compare them to the worst person in the dumb people group then you'll conclude the smart people have more value. And even if you look at the nicest most benevolent person in each group and compare them you'll conclude the smart people have more value because they have more power and influence with which to do good things. But if you look at the worst and scummiest people in each group you'll conclude the opposite because the smart person has more power and influence with which to do evil things. No low IQ murderer will ever come close to doing as much evil as Hitler or Stalin, who were highly intelligent people. Intelligence gives more potential, but this potential can be used for good or evil, and it's the person's moral character and personality that determines what they'll use it for, and thus their net moral value.
I'm going to try to steelman some of FC's points. I don't necessarily fully agree with these, but I think they have some merit. First, most of your comment seems to be premised on the idea that the objection is to converting at all. You keep repeating and extrapolating the phrase "abandoning the faith of their forefathers" as if that, itself, is FC's core argument: that converting to a different faith is bad/traitorous. This is an inherently relativist perspective, trying to be fair and treat all belief structures equally. Christians never object to the notion of conversion in general; the position is always that Christianity is actually true/good and other religions are false, therefore conversion to Christianity is good and conversion away from it is bad. It's possible to make all sorts of objections to this position, but the fact that you argue from a relativist perspective suggests you (or maybe FC, or both) are missing the point.
Second, independently of whether Christianity is true/good in some objective sense, there's an additional issue you don't seem to notice: a simple pragmatic alliance. Currently, Christianity is in the middle of being conquered by wokeism; these are the two major factions. The argument FC seems to be putting forth, or maybe a steelman of his position, is that Christianity, as the defender and the prominent force for thousands of years, is the most realistic faction capable of actually defeating wokeism. The criticism is not just that you didn't choose his preferred faction, but that, in the middle of a war between two major powers, you joined a minor third party with no hope of defeating either. If you want to defeat wokeism, you need to ally with, or preferably join, the Red Tribe for real, not play a third party that is half ally, half enemy, fighting against both.
Personally, I'm less optimistic than these arguments would imply about how realistic it is for Christianity to make a comeback and defeat woke-ism without significant Blue Tribe support. More realistically, I'm hopeful that if we can defend for long enough, then woke-ism will eventually collapse on itself, and/or mutate into something less horrible, and/or the Blue Tribe will come up with something less horrible which can outcompete woke-ism, which will then conquer and take over everything and be worse than Christianity but better than current woke-ism, and our society won't collapse. But I do think that Christianity has a powerful defense against woke-ism that non-woke atheists lack, which is a strong, mostly objective morality system. We know what is right and what is wrong, and when progressives make moral arguments it's relatively easy for us to A. not be seduced by their arguments, and B. make strong defensive arguments against them. And while these arguments aren't necessarily convincing to non-Christians when they rely on biblical principles that non-Christians don't share, sometimes they are. I don't think most atheists have the same level of moral conviction (a lot of Christians lack it too), which is why they keep ceding more and more ground to the leftists over time. A lot of people don't care that much about moral philosophy, but they don't want to be a bad person. If they don't already know what's right and wrong, then they let someone else tell them what to do; the only question is whether it's the church or the diversity officers. And, despite all of its many flaws throughout the years, if they're not going to think for themselves, then I'd rather have people listen to the church than the alternatives.
The solution is to not lower the status of people with lower IQ. It is possible, and quite likely, that we literally live in a world where black people, on average, have lower IQ than white people. If true, this means that, in reality, one of the following must be true: people with lower IQ have the same moral value as people with higher IQ, or black people have less moral value than white people.
No amount of obfuscation, linguistic gymnastics, or averting one's gaze can avoid this dilemma. You have to pick one of the two (technically there's a third option where unintelligent people have more value than intelligent people, but that's pretty niche). An awful lot of people firmly believe that people with low IQ are lesser, which forces them to either accept reality and become racists, or deny reality to avoid the logical conclusion of their beliefs. I would argue that the latter is just closeted racism, because they believe unintelligent people are lesser, so all of the unintelligent black people who exist in reality are people they implicitly regard as lesser. The bullet to bite is that unintelligent people are not automatically bad people, and you're not better than them just because you're smarter than them. Once you do this, the entire structure of "racist truths" disintegrates, because you're not automatically assigning moral value or hatred to people just because of the way they were born.
The truth cannot be racist, because the truth does not assign moral value. People do that.
Are there any good centrist or right-wing think tanks in the U.S. that are respectable, principled, and might have interest in hiring a mathematical modeler that I could apply for jobs at?
For context, I have a PhD in math; I specialize in game theory and mathematical modeling. I also have some experience with disease modeling, though not Covid specifically. I am currently nearing the end of a postdoc research position at a university, and I have a few published papers and several drafts I'm still working on. Over time I've felt less comfortable in academia as everything shifts left and all the diversity stuff keeps getting worse. It hasn't affected me directly much (though when you don't hear back from a job application, you can never tell whether being a straight white male was the cause), but it's kind of uncomfortable, and there are some topics I'm interested in that I'm afraid to actually go into because the papers might be rendered unpublishable. And in general I feel the papers I publish don't actually matter all that much, and I suspect something more applied like this might feel more meaningful.
I'm still applying for some jobs at universities, but also industry jobs, and am wondering if maybe a right-wing think tank would be a good opportunity. Given the left's capture of the universities, maybe there's a shortage of right-wing academics and I'd have a better shot of getting in? But that's probably less true in math. And I definitely don't want to work for some propaganda machine that just hacks studies together to conclude whatever they already believe. I think the left sometimes has legitimate points that are worth considering, and ideally I would like to make good scientific papers and mathematical models that give insight to people on both the left and right, and can potentially make the right stronger and smarter.
Bonus points for places that allow working remotely or are near the East Coast so I don't have to move very far, but at this point I'll take what I can get.
Historical examples can be misleading for making predictions.
You know what's even more misleading for making predictions? A complete absence of historical examples. You're extrapolating way too much from "this thing is flawed" to "this thing is useless" without comparing it to the alternative. People are already really bad at critical thinking about events and politics; we don't need less of it, we need more. Now, you can make a really good argument that history classes should be improved to deliver more value per time rather than being made longer (which I would be inclined to agree with). But the idea that they have no value to give is absurd.
One fix would be to use an unfiltered AI, or at least a discriminator that was trained on unfiltered AI.
More likely, the organizers would give the contest artists a list of rules, which would disqualify any art with "unacceptable" content, which would be approximately the same as what the AI considers unacceptable. Though this may still lead to exploits, like making something vaguely swastika-shaped that humans won't find offensive but the AI will, because it can't tell the difference.
Probably. But you could mitigate some of that by having an adversarial learning environment where an AI trains against the classifier and learns to avoid the more obvious tells. And even if the detector is really good and humans can't truly fool it with a >50% belief, they can still compete with each other: a 20% beats a 16% beats an 8%, even if none truly fool the AI. And maybe for minimizing you would draw really smooth but curving lines that AIs have trouble with. (In either case, you might need a score for artistic beauty from human judges in addition to the classifier score, to prevent trivial entries like a blank canvas or a couple of boring straight lines.)
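One possible scoring rule for such a contest, sketched here with made-up entry names, beliefs, and thresholds: rank entries by the classifier's belief that the piece is AI generated, gated by a human beauty score to rule out trivial entries.

```python
# Rank "fool the detector" entries by the classifier's belief that the piece
# is AI generated, but disqualify trivially ugly entries (blank canvas, a few
# straight lines) via a human judges' beauty score. All numbers hypothetical.

def contest_score(classifier_belief, human_beauty, beauty_floor=0.5):
    if human_beauty < beauty_floor:
        return 0.0  # disqualified: fails the human beauty gate
    return classifier_belief  # maximize (negate for the "most human" contest)

# (belief, beauty): a 20% beats a 16% beats an 8%, though none fool the AI.
entries = {"A": (0.20, 0.9), "B": (0.16, 0.8), "C": (0.08, 0.9)}
ranking = sorted(entries, key=lambda name: contest_score(*entries[name]),
                 reverse=True)
# ranking == ["A", "B", "C"]
```

The beauty gate is what prevents the blank-canvas exploit: a maximally "human-looking" entry that no judge finds beautiful scores zero.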
Not only do I want this, but I want to see art competitions spring up around such a model where artists attempt to make art (while being observed to prevent cheating) which either maximizes or minimizes the AI's belief that their art is AI generated. Having artists study and attempt to replicate AI art styles in order to fool the detector would be cool and a funny reversal of the current paradigm, and having artists learn techniques which are uniquely human would be cool too (and may be a thing they attempt to do even without such competitions in order to minimize suspicions laid on their art).
You make a good point that there are a wide range of possible fake, or at least questionable, beliefs in a broad range of areas. But I don't think that invalidates my point that there are an absurdly large number of genuine beliefs about banal things. Any number of anecdotes does little to provide statistical weight when, for every suspicious "My wife is the most beautiful person in the world" you cherry-pick, there are literally hundreds of trivial beliefs like "My wrinkly grandpa is not the most beautiful person in the world", "my neighbor's dog is not the most beautiful creature in the world", "My wife's red scarf is more beautiful than her brown purse", "My wife's red scarf is more beautiful than mud"... that never get questioned and are rarely even mentioned, because they're just so obvious to the person holding them and relatively uninteresting.
I'm not arguing that nongenuine beliefs don't exist, or are super rare in some global sense. Just that they are vastly outnumbered mathematically if you consider the full set of ordinary beliefs that people have continuously throughout the day that let them function as human beings.
First and foremost, this seems absurdly difficult to measure rigorously. It is easy to determine whether someone professes a belief, you just ask them on a poll. It is highly nontrivial to determine whether someone "truly believes" something in the way you describe in any sort of objective sense. You can make a bunch of inferences that you think ought to logically follow from the true belief and also ask them about those on a poll, but that's incredibly subjective in what "counts", and someone with genuine true belief could disagree with some of your logical implications or disagree with those particular statements because they also have other beliefs you didn't expect them to have. And someone without genuine true belief could agree with those statements for other reasons.
Similar issues come up if you try to track real-world behavior like "does this person buy a gas-guzzling car?" Maybe they really believe in climate change but they're just selfish and care more about their own convenience. Maybe they have a consistent belief that only the 1% of people with the most need to haul heavy things around should have large trucks, and they genuinely believe they qualify as one of those people. Maybe that belief is partly selfishly motivated and partly genuine, and it's not a binary thing. Similarly, lots of people who don't believe in climate change still have a low carbon impact simply by coincidence. Any attempt to measure hypocrisy is going to be incredibly subjective and could produce completely different answers based on the methodology.
Second, I think a lot of the perceived sparseness is availability bias. You are thinking of positive examples where people have hypocritical-seeming behavior, and controversial issues that people disagree on, but if you look at a broader and less interesting class of beliefs, I expect you'd find 99%+ of beliefs are genuine. Everyone believes the sun will come up tomorrow, and acts accordingly. Everyone believes that wearing clothes in public is good behavior, and acts accordingly. Everyone believes that using doorknobs is the optimal way to open most doors, and acts accordingly. There are millions of minor facts that everyone genuinely believes in, acts as if they believe in, and takes for granted, not even thinking about them except when educating children. It's only controversial concepts, which some people do and some people do not believe in, that draw your attention when making these considerations. So if you're trying to make some sort of claim about the rarity of genuine beliefs, you need to be careful about what class of beliefs you are considering.
Additionally, controversial issues where there is mixed evidence are precisely the issues where a good Bayesian ought to have a nontrivial probabilistic belief. Maybe someone thinks there's a 60% chance that anthropogenic climate change is a big problem, and so they make some high-efficiency efforts that they think have a high value per cost, but not others, because the expected value is lower than someone with a 99% belief would perceive. Does this 60% belief count as "genuine"? And would your study be able to tell the difference between that and someone with a hypocritical professed 99% belief?
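The probabilistic-belief point is just an expected-value threshold, sketched here with made-up harm and cost numbers:

```python
# A mitigation is rational when belief * averted_harm exceeds its cost, so a
# genuine 60% believer and a genuine 99% believer can act differently on the
# same menu of actions. All numbers are illustrative.

def worth_it(p_belief, averted_harm, cost):
    return p_belief * averted_harm > cost

# High value-per-cost action vs. a marginal one (harm 100, costs 50 and 80):
sixty = [worth_it(0.60, 100, 50), worth_it(0.60, 100, 80)]
ninety_nine = [worth_it(0.99, 100, 50), worth_it(0.99, 100, 80)]
# sixty == [True, False]; ninety_nine == [True, True]
```

An outside observer who expects 99%-believer behavior would wrongly flag the 60% believer as a hypocrite, even though both are acting consistently on their actual beliefs.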
In theory something along the lines of your study, done extremely carefully, could be useful. In practice it is incredibly likely to be muddled with subjective biases to the point of unusability except as a cudgel by some people to bash their political opponents with and call them hypocrites with "scientific evidence", and nobody learns anything they didn't already know.
I think this is just a disagreement of semantics: you and he are using slightly different definitions for the word "lie". You are using it to include any form of dishonesty, or at least the forms the media uses, while Scott is using it to mean literal false statements of fact which are objectively disprovable.
"I care because there’s a lazy argument for censorship which goes: don’t worry, we’re not going to censor honest disagreement. We just want to do you a favor by getting rid of misinformation, liars saying completely false things. Once everybody has been given the true facts - which we can do in a totally objective, unbiased way - then we can freely debate how to interpret those facts."
He made (at least part of) his point in this piece. It isn't "the media never lies therefore you can totally trust them." His point is "honesty is a gradient such that it's possible to be partially dishonest but not outright lying. Dishonesty in the media is subtle and ambiguous. This makes it impossible to make unambiguous censorship rules which are both effective at handling misinformation and impossible to abuse".
The media is dishonest in subtle ways without lying, therefore preventing misinformation is much harder than simply censoring/punishing lies. But censoring/punishing editorializing probably goes too far and prohibits any meaningful discourse beyond bare bones fact reporting.
It is quite questionable whether any of this target audience reads Scott, though.
At the very least it arms the audience he does have. That is, people who oppose censorship and read his piece will have a better understanding of this argument against it. And then upon encountering someone who is pro-censorship they can more lucidly argue the point and/or drop a link to this article.
I am a human. I have creativity. What I don't have is the talent/skill to bring the things I imagine to life without it looking incredibly misshapen, distorted, and low-quality.
Kickstarter is siding against my creative work, and against me as the human who would be behind that work if I had AI assistance. This seems like the classic problem of an existing industry having superior lobbying power over a potential future industry, which doesn't yet exist as an organized group and has no lobbying power of its own.
-
The same would apply to a lesser degree when working for the government in a prominent position.
-
Conservative women's clothing exists: women go to court all the time. A non-binary/gender fluid/trans person who stuck to their guns that they are genuinely a woman and didn't believe their gender nonconformity was obnoxious and unprofessional could wear respectable women's clothing to court.
AI killing everyone directly is a normal AI-risk topic. This isn't any different, except that fanatics of any ideology tend to be less worried about collateral damage and so will be less careful.
The most realistic bad route I see this particular flaw causing is something along the lines of:
1. Make lots of useful informational tools like Wikipedia and Google Assistant, plus entertainment, chatbots, pretend-lovers for incels, etc., and fill them all with subtle, superhumanly persuasive leftist propaganda.
2. The Overton window shifts left even faster than it does now; Democrats take over everything, but eventually get replaced by even more extreme leftists and Marxists.
3. Literal communists get enough power to implement their revolution.
4. Genocide/famine, because that's what communist revolutions always do.
Social and political discourse normally has negative feedback loops. The more insane stuff people do, the more pushback they get until eventually they're forced to stop. If you censor stuff the feedback weakens considerably and people can get away with a lot more insane stuff before anyone can notice or coordinate with each other to stop it. AI doesn't have to kill us all if we just do it to each other.