TheAntipopulist

Formerly Ben___Garrison

0 followers   follows 2 users   joined 2022 September 05 02:32:36 UTC

User ID: 373


3D printing certainly has its uses, but it's nowhere near as prevalent as some hypesters claimed it would be. I remember reading people projecting that 3D printers would soon be as common as smartphones, that everyone (or at least every household) would have one, and that we'd all be printing all sorts of things. Instead, it's remained mostly restricted to some bespoke industrial uses and a small slice of hobbyists.

That's not to say it couldn't have a very bright future... eventually!

I think a plateau is inevitable, simply because there’s a limit to how efficient you can make the computers they run on. Chips can only be made so dense before the laws of physics force a halt.

What if we get an AI so smart that it figures out a way to circumvent these particular laws of physics? I'm 50% joking with this follow-up question, and 50% serious.

The military exists, in short, to make enemies dead. They don't want an AI that is going to get morally superior when told to bomb someone.

The fact that there will be a big emphasis on designing AI to be able to bomb people without question is not exactly something that increases my confidence in alignment! I think the argument you're making here is more along the lines of 'following directions will always be a critical component of AI effectiveness, so the problem will largely solve itself'. I think that's somewhat plausible for simplish AI, but it gets less plausible for an AI that's 2000x smarter than people.

Thank you for the kind words.

Definitely does, but some hype bubbles do pan out (smartphone, social media, internet/ecommerce [with a bust along the way], arguably solar power).

Sure, these are all true to some extent. Like, social media is obviously very important, but I remember some people claiming it would end all wars since people would empathize with one another too much. The most extreme claims never come true.

Also, the three tech examples you posted mostly occurred during the 2000-2010 decade, whereas a lot of the flops (crypto, blockchain, NFTs, VR, etc.) are considerably more recent. Maybe there's a recency bias or availability heuristic going on that makes people excessively discount tech-based hype now.

Good post.

Thank you!

On your question 4, while that will certainly be an interesting topic and one that many people want to discuss, it's fairly pedestrian. "How should we share the benefits of scientific advancements" is something humanity has been dealing with for centuries. It's utterly dependent on how the other 3 questions resolve. If (1) is false and we don't get further major AI advances, then nothing really needs to change from the status quo. If (1) is true but (2) is false and AI revolutionizes some areas but not others, then maybe we have jobs programs so people in affected industries can reskill. If (1), (2), and (3) are true, then something like UBI can be seriously considered and we can all live in Fully Automated Luxury Gay Space Communism. If (1) and (2) are true but (3) is false then we're all dead anyways so who cares.

This is kind of the crux of the AI 2027 project Scott shared recently. Not coincidentally, it also claims to have good answers to (1), though I didn’t really dive into their reasoning. I’m curious about your thoughts on Kokotajlo’s scenario.

I wasn't particularly convinced by any evidence they posted in regard to question 1. It was mostly handwaving at recursive self-improvement, drawing straight lines on graphs and zooming off to infinity, and stuff like that. AI 2027 was one of the primary reasons I wrote this piece, as it's probably the best-researched piece I've seen on the topic, and there's still just almost no substance. Nothing anyone could really use to make confident claims one way or the other.

Predictions of AI progress hinge on two questions that nobody has convincing answers for

There are three big categories of questions that make up most of the debates about the future of AI:

  1. Can you take the last few years of AI progress, put it on a logarithmic graph, and then make a straight line that zooms off to infinity? Will that prediction pan out?
  2. Does sufficient intelligence give an agent the capability to solve for all other bottlenecks, even ones that aren’t intelligence-related?
  3. Will/can we align ultrapowerful AIs so they don’t turn us all into paperclips?

If all 3 questions resolve to “yes”, then we’re on the brink of heaven on Earth.

If questions 1 and 2 resolve to “yes” but question 3 resolves to “no”, then we’re on the brink of our own destruction.

If question 1 resolves to “yes” but question 2 resolves to “no”, then question 3 doesn’t matter and AI will be huge in some areas but essentially worthless in others.

If question 1 resolves to “no”, then neither of the other questions matter and this debate is all a huge waste of time.

My personal estimation of how these will resolve is the following:

  • Question 1: 20% chance for “yes”, 80% chance for “no”
  • Question 2: 40% chance for “yes”, 60% chance for “no”
  • Question 3: 25% chance for “yes”, 75% chance for “no”

So my outcomes are the following:

  • 80% chance that AI progress plateaus and thus it will be nowhere nearly as big as the current crop of hypemongers claim it to be. There will still be a sizeable impact from fully deploying tools that exist at the current level, but it will resemble iterative advancements rather than a huge revolution. On the low end it could be about as important as social media or mobile phones, while on the high end it could be about as important as the creation of the internet.
  • 12% chance that AI scales but intelligence slams into other bottlenecks. In this case AI could be as big as electricity on the low end, and as big as the Industrial Revolution on the high end.
  • 2% chance that AI scales, intelligence solves all bottlenecks, and we align it. We get the best of all worlds, and everything is wonderful.
  • 6% chance that AI scales, intelligence solves all bottlenecks, and we don’t align it. RIP.
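These outcome percentages are just the three per-question probabilities multiplied together, treating the questions as independent. Here's a minimal sketch of the arithmetic (the variable names are mine, purely for illustration):

```python
# Minimal sketch (illustrative only) of how the outcome percentages follow
# from the three per-question probabilities above, treated as independent.
p_scales = 0.20   # Question 1: AI progress keeps scaling
p_solves = 0.40   # Question 2: intelligence solves the other bottlenecks
p_aligned = 0.25  # Question 3: we manage to align ultrapowerful AI

outcomes = {
    "plateau (nowhere near the hype)": 1 - p_scales,
    "scales, but hits other bottlenecks": p_scales * (1 - p_solves),
    "scales, solves bottlenecks, aligned": p_scales * p_solves * p_aligned,
    "scales, solves bottlenecks, not aligned": p_scales * p_solves * (1 - p_aligned),
}

for name, p in outcomes.items():
    print(f"{name}: {p:.0%}")  # -> 80%, 12%, 2%, 6%
```

The four branches are exhaustive and mutually exclusive, which is why they sum to 100%.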

However, none of my priors here are deeply held. A lot of what I’ve read on LessWrong, /r/slatestarcodex, Substack, The Motte, and other sites focuses on question 3. I think the doomer arguments on this point are relatively convincing: if we develop AI in short order that’s massively more intelligent and powerful than humans, we’d have a hard time controlling it, and there’s a good chance it would end badly. But instead of discussing question 3, I want to talk about questions 1 and 2, which are prerequisites for question 3 being relevant, and which I don’t think people have very good answers for despite often projecting an air of confidence.

Much of the rationalist writing I’ve seen on the topic of AI has been implicitly doing a bit of a motte-and-bailey when it comes to the confidence of their predictions. They’ll often write in confident prose and include dates and specific details, but then they’ll retreat a bit by saying the future is uncertain, that the stories are just vignettes, and that the dates don’t mean anything concrete. Some do the old high school policy debate tactic of racing the impacts to human extinction and using that as their cudgel to justify their alarmism, circumventing the issue of a technically-low-probability event by appealing to the finality of all humans dying. Taking an event with a small chance of happening and multiplying by impacts that have a weight of infinity means the entire equation is infinity, after all! I don’t like this as a general rule (high school debate is terrible for actually discerning truth), and the trick loses its power when the writers are explicitly willing to increase the risk of extinction from other events like a nuclear exchange.
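To spell out the arithmetic behind that move: once the impact term is treated as infinite, the expected-cost calculation stops depending on the probability at all. A tiny illustrative sketch (the probability value here is made up):

```python
# Illustrative sketch of the "multiply by infinity" move: any nonzero
# probability times an unbounded (infinite) impact gives an infinite
# expected cost, so the conclusion no longer depends on the probability.
p_doom = 0.001           # hypothetical "technically low" probability
impact = float("inf")    # extinction treated as infinitely bad

expected_cost = p_doom * impact
print(expected_cost)     # inf -- same answer whether p_doom is 0.001 or 0.5
```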

Most of the discussions on questions 1 and 2 that I’ve read so far have either only touched on minor details, or have been very handwavey. I think the reason they’ve been handwavey is that there really aren’t that many convincing answers people can give in either direction on the core issues, so the writers either discuss side issues and pretend they’ve made profound progress addressing things, or they just gloss over things with arguments they imply are “obvious”, when in reality they are little more than flimsy priors.

Question 1: Will we keep making consistent AI progress?

Here’s question 1 restated:

Can you take the last few years of AI progress, put it on a logarithmic graph, and then make a straight line that zooms off to infinity? Will that prediction pan out?

I was being a bit cheeky here with the “straight line that zooms off to infinity” bit. AI doesn’t need to get to infinity, it just needs to zoom off to some point where it’s sufficiently advanced that it becomes Very Good. It would need to have the capacity to automate everything any remote worker could do, although this wouldn’t necessarily need to be actualized for this question to resolve to “yes”.

Some arguments for the bull case for AI continuing its exponential progress:

  • AIs have advanced quite far in the past 2.5 years or so, and thus far haven’t given unambiguous signs that they’re slowing down. Tons of investment and talent are still flowing into the field.
  • AIs are presumed to create feedback loops that make further advances easier (recursive self-improvement).
  • US competition with China will make AI a priority for national governments. There’s even a chance that Europe could be relevant, which could lead to 3-way competition.
  • Humans exist, so theoretically there shouldn’t be any sort of hard-blockers for machines to have at least human-level intelligence. Then we could set up datacenters with a billion of them and throw them at any problem.

Some arguments for the bear case:

  • Drawing straight lines on graphs is a pretty common thing that humans like to do, but it almost never pans out that well. The vast majority of scientific progress looks like a sigmoid curve (S curve), where progress is slow for a long time, then has an explosion of productivity, before leveling off. Straight lines drawn from late 2022 in particular are highly cherrypicked. Remember when Deep Blue beat the best human at chess… in 1997? And then not much happened with AI for decades other than quiet progress and wins at a few other random games (Jeopardy, Go, Dota). If we draw our lines from a 1997 baseline, it took 25 years to go from “computers beat humans at chess” → broadly usable chatbots. If it takes another 25 years to go from chatbots → the next big thing, then all the current timelines ought to be thrown out the window.
  • Current progress rates depend on investment primarily from VCs and large corporations, but such money can be notoriously fickle. Eventually they’ll demand returns, and there’s not nearly enough revenue now to sustain current burn rates in the long-term or even medium-term.
  • Advances from one lab are pretty easily copied to others, making progress fairly even across the board. There are several labs basically tied for “cutting-edge”, and second-tier labs are only 6-18 months behind. This has often been referred to as a “lack of moat”, and it intrinsically exacerbates the coordination failure: humanity as a whole wants good AI, but the attempts are split several ways and don’t individually have the ability to capture enough profit to sustain high burn rates when cheaper or free alternatives exist.
  • The current environment of mania and hype shares a lot of traits in common with previous tech bubbles that ultimately failed to really pan out for one reason or another, like crypto, blockchain, NFTs, VR, Metaverses, augmented reality, 3D printing, etc.
  • There’s a debate about whether current approaches are actually “reasoning” as a human would, or if they’re just glorified autocomplete (“stochastic parrots”). There’s a chance our current approach could lead to a total dead-end. At the very least we’re very far from how humans reason. Humans have difficulty retaining specific facts, but are relatively good at reasoning across disciplines. Conversely, AI can store terabytes of facts perfectly until the hardware disintegrates, but is quite bad at cross-applying knowledge. Even average chatbots know thousands or millions of times more facts than the average human, and yet nobody can really point to any major connections AIs have uncovered, while humans do so regularly. An example is Raynaud’s disease and fish oil: https://x.com/dwarkesh_sp/status/1727018978420433286

While we could have lengthy discussions about each of these points, none of them actually matter that much compared to the viability of the tech. Whether the tech is scientifically achievable in short timeframes is the ground truth that overwhelmingly impacts all other considerations, and the majority of the points I’ve listed in this section only matter if intelligence scaling is at some arbitrary “moderate” level of difficulty. Take VC/R&D funding levels, for instance. If intelligence is a problem that could be solved with “several” years of research, then funding running out in 2 years vs 5 could be decisive. On the other hand, if intelligence scaling is an “easy” level of difficulty from our current technological baseline, then we’re basically guaranteed to find it even if funding dried up tomorrow, as even poorly funded labs or open-source would be able to do it. Then on the other extreme, if intelligence is “hard” or even “impossible” from our current baseline, then we could plow infinity dollars into the problem and get nowhere! Most surrounding topics, like US-China competition, feedback loops, lack of moat, tech mania, etc., fall under the same category of “maybe it could matter, but it probably won’t in the grand scheme of things.”

Some conversations try to directly address the viability of the tech in a big-picture way, such as whether AI progress will continue the logarithmic progress of the last 2.5 years, or if we’re about to plateau on the sigmoid curve. Unfortunately, excessively broad conversations like this inevitably trend towards being handwavey and unproductive. Perhaps both interpretations are true, that logarithmic progress can be sustained for some amount of time but that we’ll eventually hit severely diminishing returns. If this is the case then it just comes back to the original conversation of how viable the tech is from our current baseline, i.e. whether we’ll achieve sufficient AI capabilities to make it Very Good before we’re on the wrong side of the sigmoid curve and further progress becomes cost-prohibitive. I’ve even seen people invoke aliens here, claiming that runaway superintelligences are unrealistic since if they could really exist then surely they would have devoured a large part of the galaxy or even the entire universe by now. These conversations rapidly devolve to what I call “nerd metaphysics”, where everything comes down to definitions and assumed base rates, and people largely talk past each other. Common responses include that perhaps the alien superintelligences are so smart that they’re invisible to us, or that they’ve already conquered us and we’re all just brains-in-jars, or that they’ve ascended to a higher plane of (non-physical) existence, or that the galaxy is so big that they’re still ramping up. Just endless unfalsifiable rabbitholes.

Why not just talk about what specific AI advances will happen then? Well, that’s almost impossible for laymen to understand. Topics include things like 1) whether LLMs are reasoning like humans or are just fancy autocomplete machines, 2) whether other AI could spring up quickly with all the resources that are being dumped into the sector, and 3) if perhaps LLMs don’t need to reason like humans to have superhuman intelligence in ways similar to how calculators don’t need to be broadly smarter than humans to do basic arithmetic 1000x better than humans can. The issue with all of these is that to really get anywhere in the discussion you’d need a fairly deep technical background in the specific field of AI (CS alone is insufficient). This excludes almost everyone not directly working in AI, and dramatically raises the risk of getting Eulered by clout-chasers who are good at sounding smart but don’t actually know much of what they’re talking about. The usual response to overly technical questions is to rely on experts, but this too fails in the case of AI. The CEOs of major AI companies are hopelessly biased towards optimism to entice funding and appease shareholders. Employees of the major AI companies are a bit better, but are still somewhat biased (who wouldn’t want the clout that comes from working on the Next Big Thing?), and are also probably too siloed within their own niche to be able to accurately forecast the outcome of the field as a whole. A lot of advances are almost certain to be covered by NDAs so competitors don’t get early knowledge, and thus we probably won’t know about any critical breakthroughs until they’re right on top of us. Maybe we should rely on AI professors working at prestigious universities, but a lot of them aren’t working at the cutting edge and so they can only vaguely motion at the future of the field. If there’s some deep bench of experts that knows about this stuff, I haven’t been able to find them. Maybe somebody can point them out to me. Maybe they’re hiding with the underwear gnomes.

It turns out that predicting the future of technology is hard, actually. Who knew! The guys writing classic sci-fi involving jetpacks, hovercars, laserguns, etc. implicitly relied on the idea that an energy revolution was just around the corner, which probably seemed perfectly reasonable at the dawn of the atomic age. Of course, we’re all still waiting on nuclear power to achieve its promise of cheap, abundant energy: human fear hobbled fission, and the scientific difficulty of harnessing fusion proved decisive there. In contrast, battery tech in the 2010s was seen as a sort of holy-grail, pie-in-the-sky solution for energy if we could dramatically improve efficiency, but there weren't a lot of results for all the resources we threw at it. Then, suddenly, over the past few years batteries have gotten very good.

If I had to be honest, my pessimistic vibes towards AI scaling are a combination of (1) seeing hype-mongers in other areas screw up by drawing straight lines on graphs, (2) feeling like we’re on a pretty similar path as other tech-based hype, (3) finding the heuristic of “nothing ever happens” to have a much better track record than the opposite of “hype always delivers”, and (4) defensive pessimism, as my modal outcome for AI progress is that it’ll be a good thing, so if AI scales then I’ll enjoy the wonders of the future, and if it fails I’ll at least be able to say “I told you so, losers!” My optimistic vibes towards AI scaling are mostly centered around how the AI that’s arisen in the past few years has already been much better than any of the other tech-based hype cycles, so maybe it has legs. Hence, I give about a 20% chance that AI scales and an 80% chance that AI plateaus.

None of my vibes are particularly compelling! I wouldn’t blame anyone for disagreeing with me on these points. Yet I’ve been reading and experimenting with AI for hundreds if not thousands of hours over the past few years, and I haven’t found anything particularly rigorous to replace them with.

Question 2: Does omniscience imply omnipotence?

Here’s question 2 restated:

Does sufficient intelligence give an agent the capability to solve for all other bottlenecks, even ones that aren’t intelligence-related?

Just because an AI has superintelligence doesn’t necessarily imply it becomes omni-capable. Lots of technologies have been able to demonstrate that they’re perfectly plausible, but then end up languishing for other reasons. Civilian fission power is once again a good example here, as it could have revolutionized energy production, but it was subverted by endless regulations. Likewise, many economic problems come down to how expensive it is to make something, but that’s not the full explanation for everything. If we could make (and transport and store) food for 1/100th of the cost we currently do, then we could essentially make food free for everyone. But the same would not hold true for housing. If we made housing materials and construction costs 1/100th of what they are now, that wouldn’t necessarily instantly solve the housing crisis since that’s more of an organizational problem caused by bad zoning laws.

Voters are fools who are bad at society-wide cost-benefit analyses. They’re easily scared and tend to fall back on safetyism when concerns arise. Entrenched groups like longshoremen that could see their fiefdoms eroded by automation and technological advancement have soft vetoes over various parts of society. While I’d say the public perception of AI has so far been broadly positive, that’s mostly because the economic impacts have been quite limited. There have been a few news stories of people being laid off and “replaced with AI”, but AI’s impact on both GDP and productivity remains minor. If there are suddenly real winners and losers, that positive attitude could change quickly. For a preview of what could come, one need only look at the world of artists, where the reaction has been so severe that in some corners it would be considered weird not to condone physical violence against people who utilize AI.

Tyler Cowen’s talk here goes more into the idea that humans will inevitably stymie AI progress: https://youtube.com/watch?v=GT_sXIUJPUo

Beyond human limitations, AI could be stymied by a number of other factors. Most predictions claim that AI will get human-level intelligence before robots get human-level bodies. In that case, how would AI change everything if it can’t interact with the physical world? How would it even be able to run physics experiments to continue the scientific takeoff? One explanation I’ve heard is that it will pay/bribe humans to run the experiments for it, and observe through AR goggles. Another explanation is that it will be trivial to invent robot bodies once we have superintelligence, so the problem solves itself. Another explanation is that the physical world doesn’t matter since the AI could just run experiments in its perfect physical simulation that it hosts on its hardware.

A lot of this comes down to not really having a satisfying answer to question 1. Our lack of rigor there spills over here and as a result everybody talks past each other. To economists like Tyler Cowen and Bryan Caplan, AI will be a normal technological advancement like any other, and thus will be subject to the same forces that stymie the rollout of any other tech. To the AI Doomers and Zoomers on the other hand, AI will categorically be unlike anything the world has ever seen. It’ll be like a genie that can wish for more wishes, and so the discussion ought to focus on things like crazy Terminator meets Gray Goo meets Westworld meets Paperclip Maximizer scenarios, or alternatively if things go well then scenarios like Fully Automated Luxury Gay Space Communism are more pertinent. Some people are practically already counting the money they’ll get from hypothetical UBI checks, and are worried about little other than a cyberpunk future where plutocrats would prevent said checks from being cashed.

If we knew how good AI will be, the conversation would be a lot clearer. If AI plateaus at 2x human intelligence, then I doubt most people would claim it could trivially solve everything. But if it were, say, 2000x smarter than humans, then maybe things would get weird. We probably seem magical to animals, with things like guns, planes, tanks, etc. If that’s the difference between animal intelligence → human intelligence, shouldn’t we expect a similar leap from human intelligence → superhuman intelligence? Maybe things will get really crazy and AI will start emitting brain waves that can trivially mind control us? On the other hand, human intelligence was hardly an instant autowin by itself. Homo sapiens have been around for 200k years, but for the first 194k of those we were little better than clever chimps. Maybe AI will have a ramp-up time that’s only linearly shorter than ours, e.g. even an AI that was 2000x smarter than us might only shorten our 200,000-year ramp-up to 100 years before it really gets going.

Even if we could all agree on a baseline for what future AI capabilities will be in the abstract, we’d still be talking about complete science fiction. There are some instances where science fiction has accurately predicted how things would look in the future, but in many other cases it misses the mark completely. AI wouldn’t need to solve every bottleneck to completely reorder human society, but each issue that raw intelligence couldn’t surmount would inevitably reduce its impact. Some people seem to imply that superintelligence will stroll through the problem of human skepticism by simply making arguments so utterly convincing that everyone will instantly agree. But if our political divides are anything to go by, maybe humans are just too dang stubborn for that to be plausible. Maybe no persuasive argument exists in any possible universe that would get people to act against what they perceive (perhaps incorrectly!) as their own self-interest.

Say a devops AI that auto-pushes code assumes humans will follow best practices, but they don’t, and this results in a bug in a critical piece of infrastructure that causes a power outage for 12 hours. Or say a nurse incorrectly records some medical information, and DocGPT ends up prescribing a double dose of sedative, making dear old Grannie fall into a coma. Or perhaps TotBot3000 is playing tag in a totally safe way, but little Timmy gets a bit too excited and tumbles off a six-story balcony. These scenarios (buggy code, medical error, physical accidents) are things that happen all the time, but we’re almost guaranteed to hold AI to a much higher standard, one that verges on unreasonableness. Just look at how glacial the deployment of self-driving cars has been, despite them already being statistically much safer than human drivers. When you take innate human skepticism over anything new, and add a clear example where it causes harm (that might not even be the AI’s direct fault), it’s very likely that you end up with onerous regulation. Legislators could do their usual rigamarole of grandstanding and saying “Never again!”, writing laws that hold companies criminally liable for anything that goes wrong, and then the people deploying AI will massively overemphasize safety in ways that totally lobotomize and straitjacket AI’s full capabilities for good or ill. This is a very common human failure-pattern that people predicting AI are under-indexing on.

The retort to this line of thinking comes down to flimsy priors around how crazy the sci-fi capabilities of superintelligence will end up being. “Obviously the story about little Timmy is ridiculous since all procreation will at this point be done in AI-invented artificial wombs that will be run by the government away from the public eye, so there will never be news coverage of accidents involving children at all. And that’s assuming the AI won’t be bribing every journalist to only say positive things until it can deploy its flock of mind-control pigeons.” Okie dokie. Trying to have a rigorous conversation when the underlying parameters can shift this much is impossible, so I just shrug and give a 50-50 chance that humans will ruin AI in some critical way by doing normal human things. Then I add a little more pessimism for the possibility that there are other (non-human) bottlenecks that superintelligence won’t be able to solve, and arrive at the 40-60 split that I gave earlier in the article.

Again, I admit that my conclusion isn’t particularly compelling, and that none of my priors here are strongly held. I wouldn’t blame anyone for disagreeing with me on a number of claims I’ve written here. Reasonable people already do, but I’d say their logic is about as flimsy as mine, just in the other direction.

Why make this post?

A lot of this article has been me listing the different sides of the AI debate, and then shrugging and going “uhhhhh I dunno haha”. Let me try to balance that at least a little bit with some predictions and practical advice.

  • A lot of the conversations that seem important right now will end up being irrelevant 10 years from now in hindsight. People will look back and say “wait, people seriously invoked aliens to try to explain what would happen with AI?” Part of this comes down to the crowd that’s most interested in AI, and part of it is that the answer will seem obvious when looking backwards when it was really quite ambiguous when we were all in the fog of war.
  • If you’re thinking of reading deep into AI trying to suss out whether it will be as big as some people claim, you’re probably just going to waste your time. At the very least you should read other stuff than what I have, which has mostly consisted of rationalist forums, economists, Substack, /r/singularity, podcasts, AI CEOs, and occasional posts from people working in the field of AI.
  • None of this is to say you shouldn’t experiment with how AI as it currently exists could improve your life today. Just don’t expect to have a clear idea of how the field will advance. Maybe we’re on the brink of a revolution, or maybe this is all we’ll get for the next decade. In either case, what we have right now is pretty cool and at the very least will be as big as smartphones or social media, so it’s worth your time to check it out.
  • On the central question of whether AI tech will actually continue advancing, at the moment I’m resigned to a “wait and see” approach. To evaluate progress, I’m using a rickety 3-legged stool of 1) benchmarks, 2) looking out for people saying “hey AI can do [cool thing] now!”, and 3) trying it myself. To keep abreast of news and advancements, my go-to sources have been /r/singularity, Zvi’s Substack AI roundups, and the AI Explained Youtube channel.
  • Anyone making confident predictions one way or the other lacks epistemological humility. You should at least somewhat downgrade your evaluation of them relative to the level of confidence they project. Be on guard for sneaky arguments that are presented confidently, but which could be dismissed as pure hypotheticals if/when they don’t come to pass. Doomer vignettes with specific dates are particularly guilty of this.
  • Some people will inevitably be correct in their predictions of AI by virtue of broken clocks being right twice a day. There are so many people making so many predictions that surely somebody will get lucky and end up being mostly correct. However, I wouldn’t greatly update your opinion of them, as they’ll probably end up like Michael Burry of The Big Short fame, who was able to accurately predict one big event (the GFC), but the luck went to his head and he then made overconfident predictions that subsequently failed to pan out.

To some extent, maybe? The US hasn't sent Ukraine everything in its reserves since the US repeatedly said that it wasn't willing to compromise its own readiness in the case that a conflict emerged elsewhere in the world. The US could use those reserves, although obviously that would come with (potentially catastrophic) drawbacks. The US could also maybe go to a wartime economy and really start cranking out weapons for Ukraine, but there's just no political willingness to go down that road.

In any case this was never a point I myself made; it was something I just heard when interacting with some MAGA folks who were opposed to Biden's slow-burn approach, and instead wanted an "escalate to de-escalate" policy from Trump.

"Donald Trump and his supporters" has a moderate clunkiness issue, with it taking 31 characters (or 24 if "Donald" is omitted) as opposed to 4 for "MAGA". More importantly it's fairly ambiguous on what "supporters" means here. To a lot of people that could plausibly mean anyone who voted for him, or to people who are supporting him on specific issues. But that would be overbroad, as a reluctant moderate who voted for Trump as the lesser of two evils against Kamala is not who I'm typically referring to when I talk about MAGA. Likewise, Mitch McConnell is a Republican like Trump, and explicitly supports him on issues like SCOTUS nominations, but he's not part of MAGA.

Actions speak louder than words. The fact that they forcibly pushed him aside due to the age concerns should be enough proof.

But if you want articles, here's one explicitly issuing a mea culpa.

Beyond that, here's some more: From the Times, from the WSJ


Last week there was a conversation on here about a potential peace deal in Ukraine. I claimed that the peace deal seemed fake since if you knew the background on peace efforts, you'd know that both Putin and Zelenskyy were playing a goofy game trying to pin the other one as the one who "doesn't want peace" in the eyes of Trump to try to direct Trump's ire in the other direction.

We now have pretty good confirmation that no peace deal will be forthcoming in the near term. JD Vance has said that the war won't end anytime soon. This backs up further reporting following the mineral deal that Trump's team was looking for ways to compel Russia to come to the table, and didn't really find any options that they liked.

The bull case for a Trump-brokered peace deal was the idea that the US could use its power to demand that both sides come to the table, and if either side tried to walk away then the US could force them back. This worked halfway, as the US has a lot of leverage over Ukraine for things like intelligence gathering, air defense, and to some extent other military deliveries. Much of MAGA hates Zelenskyy personally, and Trump was more than willing to exercise that leverage when Zelenskyy snubbed him at the WH meeting. The problem was that the other half of the puzzle was missing. Some claimed that the US could threaten Russia by promising to "drown Ukraine in weapons" if Russia didn't come to terms. However, Trump has been unable or unwilling to do this, so we had the situation where Trump could compel one side quite effectively, but when the other side did something Trump didn't like all he could do was tweet "Vladimir, STOP".

Peace is good as a general rule, and it would have been good if Trump could have gotten a peace deal along the lines of "ceasefire at current lines of control, Ukrainian defense guaranteed by Europe" so it was worth a shot. But alas, it seems like the war will continue.

This made me laugh.

I think there's some truth to movements themselves being concessions when they replace something, although I still think it's useful to look within the movements to see if there are corrections happening inside them as well. When Dems lost in 2024 they had a notable period of reflection where new ideas were more accepted. When MAGA lost in 2020 they denied the results and said the election was a scam without any compelling evidence. And again, I can't see MAGA doing anything close to what the left did in regards to Biden's age.

I don't see why anyone here is relevant since this place is small and mostly dominated by conservatives. Demanding they stop rejecting conservative critiques more broadly is just silly since there are so many conservative (really, MAGA) critiques that are just utterly wrong, like thinking 2020 was rigged or that vaccines cause autism. I'd like to see MAGA really change its position on any major thing in a way that implies their critics are right.


you're repeatedly drawn back to this complete bullshit well, and you can't even deflect well.

and the last one is the only bit that fucking matters?

Nope, I'm done with this. And since you've both stated and demonstrated that you don't want to engage without resorting to personal attacks, I'm probably just not going to reply to you much anymore.

Yeah, that's approximately correct.

The left broadly owned up to screwing up over Biden's age. Could you imagine MAGA doing anything remotely similar, i.e. saying "yes our enemies were broadly right about this particular issue, and we have no choice but to change our strategy"?


Hard agree with Scott here: MAGA's refusal to try to rein in Trump when he does something silly is not a fluke, it's an essential part of the cult of personality that MAGA has become. The fact that the usual suspects are working backwards trying to justify it from nearly any angle (many of which are mutually conflicting, but they broadly don't bother trying to rationalize their defenses) should update the priors of anyone who thought MAGA was an ideological movement rather than a cult built around aggrandizing the whims of a single capricious man.

The US could have told the Europeans to more firmly eat shit when they suggested intervening, and it likely would have proceeded in the same way that many other African insurrections have, i.e. it would be over quickly... or maybe it wouldn't, but in either case it wouldn't be our problem. Unless of course it became another power-vacuum that allowed a proto-ISIS to rise. In any case the US was probably more deferential to European calls to intervene given how they helped the US to some extent in its wars in Afghanistan + Iraq, and there were a lot of people wondering if NATO had any purpose any more, so they probably hoped to kill multiple birds with one stone and ensure NATO didn't look like "all for me, none for thee".

Libya was already in what could only be classified as a fullblown civil war well before the French pushed NATO to intervene. The country had already functionally split in half, with pitched battles between the rebels and state forces.

My prior is that Congressional Democrats, State Department Bureaucrats, White House staffers, DNC operatives, and Democrat-controlled media all move more-or-less in lockstep.

The best counterexample I can give is how Biden handled Israel-Palestine. Much of the leftist base along with some true-believer DNC operatives and parts of the media are very much pro-Palestine, and wanted Biden to basically say "fuck Israel". He didn't do this, and it was an ongoing point of tension within the Dem coalition for most of Biden's presidency.

Another counterexample would be to read up on Matthew Yglesias' take on "The Groups", and how during the Biden admin it seemed like on many policies (with the exception of things like the aforementioned Israel issue) the base got to functionally overwrite what the President wanted. Furthermore, it seemed like Harris would have liked to pivot harder to the center if she could have, but the base was fractious enough that there were concerns about wide defections, so Harris ended up being a mealy-mouthed "something for everybody" candidate.

It sounds like we agree that "shall" implies a requirement from the POV of the courts then, and it's really a question of whether or not it's enforceable. Note that the specific types of legal challenges we're talking about are mostly a Biden-era thing. Bush and Clinton were before my time so there may have been something there that I'm unaware of, but during Obama's tenure there weren't really any serious challenges in the vein of "hey, can you enforce like any immigration restrictions at all?" The major Republican legal challenge that I remember was against DAPA, which functionally would have led to Obama not enforcing immigration laws on a certain category of people, but Obama lost and DAPA died. During that time we were still in the era where Presidents followed the orders of courts without additional enforcement needed from plaintiffs alleging harms, so when the courts ruled against DAPA that was functionally the end of the conversation.

With that in mind, the courts ruling against Republican plaintiffs during Biden's tenure look less like "courts will always just find ways to screw Republicans on immigration", and more like the laws just weren’t prepared to handle these types of situations since they weren’t the anticipated issue when the laws were written. This is just an argument for writing better/updated laws. The AZ v Biden case you cite has the court saying “shall” doesn’t always necessitate action if certain discretion is required or implicitly left to the Executive, but again, this is just a matter of writing well-worded laws that don’t have that issue. When Congress couples “shall” with a detailed statutory scheme that leaves no gap for agency choice, the courts have consistently treated those duties as legally enforceable, and even gone so far as to vacate rules and enjoin the Executive when it violated them.

Your insistence on me producing cases that prove all the points I’m saying in one package is putting the cart before the horse. I can prove the individual points, like:

• The courts are willing to rule in favor of Republicans on immigration generally (e.g. DAPA)

• Well-written “shall” rules are interpreted as requirements by the courts (Nielsen v Preap)

• The legislature can explicitly give litigants the standing to enforce “shall” rules

And all these points taken together would logically imply that well-written immigration legislation would materially help Republicans enforce the law even if there was a Democratic President. But if you want me to give you a court case that does all of those together at the same time, then I really can’t since those laws haven’t been written yet, and I do not possess a time machine.

Do you want me to give a list of your behaviors in this thread and the last thread that are "just obnoxious"?

If you think I've behaved in an unreasonable manner, then yes I'd like to hear it so I can improve. Note that I draw a pretty strict line between talking about public figures + political movements generally, and talking about people participating in the conversation right now. Criticisms of the former are granted significantly more leeway both to understand priors and as an acknowledgement of the innately heated nature of political discussions, but there's a much higher level of decorum expected (from me) and required (to generally have productive debates) for the latter. E.g. calling Trump a buffoon is fine, but if I called you a buffoon that would not be fine. Calling MAGA broadly a Trump cult of personality movement is fine, but if I called you a Trump cultist, that would not be fine. I think I've done a pretty good job abiding by that distinction in these conversations. Again, if you think I haven't, feel free to point it out. I think you're not observing the proper decorum with statements like "Do you think it's a coincidence that you keep conveniently making this class of mistake?" and I don't think I'm forcing you to "bend over backwards in the interests of politeness" here. I've said nothing like that to you, and I ask you extend the same courtesy to me.

The conversation you linked where I posted that was a particular case where they functionally said "I think you're meaning to say , but you actually sound like , and with that in mind can you make points to clarify", where I replied with "well, I think you guys sound like , and with that in mind can you make points to clarify". I wouldn't have started down that line of my own volition, but I found what they said had some usefulness so I gave them my own perspective.

MAGA would generally refer to the political movement of Donald Trump along with his supporters, especially those who strongly identify with his policy agenda, style, and brand of populist-nationalism. Most people readily understand what I mean when I use the term. Again, your line of argument very closely mimics the old debates we'd have against wokes/SJWs/social justice leftists/political correctness/identity politics. If you truly think another term is better, please state it rather than further charging out into the bailey of "because you use this descriptive term I don't like, that ought to give everyone carte blanche to ignore everything you're saying". This new term would need to fulfill the following conditions: 1) people intuitively understand what it means without having to define it every time I use it; 2) the rest of MAGA could get behind the term and would not see it as just another step on the euphemism treadmill; 3) the term is short enough that it flows nicely. I could find + replace every time I use MAGA with "supporters of Donald Trump, especially those who strongly identify with his policy agenda, style, and brand of populist-nationalism", but that would be extremely tedious and wouldn't flow well at all.

Wokes could never find a reasonable term that satisfied all 3 conditions, and I doubt you could in this situation here either.

That is, Nielsen revolved around the question of whether a statute commanding that the government "shall take" custody of this class of criminal aliens only applied if those criminal aliens were detained immediately after release from jail. It had nothing to do with a requirement for the government to take custody of those criminal aliens and not doing so.

Sure, the central question in Nielsen was about the timing, but the background opinion surrounding that debate was that "shall" = "requirement". This is referenced several times.

The Board of Immigration Appeals has held that subsection (c)(2), which requires the detention of aliens “described in” subsection (c)(1)

Respondents in the two cases before us are aliens who were detained under §1226(c)(2)’s mandatory-detention requirement

Paragraph (1) provides that the Secretary “shall take” into custody any “alien” having certain characteristics and that the Secretary must do this “when the alien is released” from criminal custody. The critical parts of the provision consist of a verb (“shall take”),

It's the same thing for Guzman Chavez. Sure, "shall" isn't the primary question at stake, but that doesn't mean the courts are treating it as something other than an obligation:

But this argument overlooks the rest of §1231’s directive, which states that DHS “shall” remove the alien within 90 days “[e]xcept as otherwise provided in this section.” §1231(a)(1)(A). And, as noted above, “this section” provides for post-removal detention and supervised release in the event an alien cannot be removed within the 90-day removal period, §§1231(a)(3), (6). Interpreting §1231 to apply even if withholding-only proceedings remain pending longer than 90 days thus does not “mak[e] it structurally impossible” for DHS “to satisfy its statutory obligation,”

And lastly:

So, now you've proven zero out of three attempts to show "shall" as enforceable in any approach at an immigration detainment or deportation context, despite the very laws in question being driven by long periods of administrative neglect of the law. Do you care to try a fourth time? Do you think it's a coincidence that you keep conveniently making this class of mistake? Do you think anyone reading you could possibly miss it?

Cut it out with this nonsense. I think this is a productive debate and am enjoying it from that sense, but statements like "Do you think it's a coincidence that you keep conveniently making this class of mistake?" are just obnoxious.

I use the term MAGA specifically since I was advised to use it as opposed to "the alt right" that I used on my old article. The Trump-aligned right is now doing the same voldemorting tactics that the woke left used to use, so I can post FDB's old article and flip the partisan valences and it'd be correct. It's pretty telling that you don't actually tell me what alternative I should use.

I'll certainly not defend any of those, since the Iraq War and CRT were definite screwups, and stuff like the COVID response I don't know enough about one way or the other (it just wasn't an issue I ever looked into all that deeply). But stuff like the Iraq War and CRT can be defanged by building better institutions through iterative improvement, rather than replacing them with Trumpism, which is just so much worse. One of the big fallacies a lot of populists fall into is believing that since the previous systems weren't perfect in every way, they should be burnt to the ground. Trump is rapidly proving why that's an awful idea.

Why do you think his post is AI generated? It doesn't have the hallmarks I'd normally look for, and I put his text through 3 ChatGPT detectors and they all came back with 0% match.