TheAntipopulist

Formerly Ben___Garrison

0 followers   follows 2 users  
joined 2022 September 05 02:32:36 UTC

User ID: 373

I'd wager most of it is simply from the decline of monogamy. In the West, both men and women are regressing to their biological sexual imperatives where men want to have sex with lots of women and women want to clinch commitment from a high-value man. This creates an adversarial relationship with many people left out in the cold: low-value men are rejected by most women outright, high-value men have a huge abundance and can treat women like sentient fleshlights, and women in general target the top quintile of men who treat them as disposable goods, or else they have to brave the lower quintiles full of "creeps". The best soothing balm for the battle of the sexes is simply being in a healthy long-term relationship, as it's hard to have a war when all the soldiers are fraternizing with the enemy. But the decline of monogamy has had a catastrophic impact on the rate of healthy relationships.

There are other more minor factors, e.g. the Internet has let people delve into their niche interests harder than ever, which has led to men and women diverging from each other more. I, for example, spend much of my free time playing grand strategy games and discussing culture war + philosophy with strangers in a rationalist framework. Women probably make up <5% of participants in any of these areas, and as such I've barely had any meaningful interactions with women since I graduated college. I didn't make a conscious effort to weed women out of my life or anything, I just focused on the things I was most interested in. I do interact with women a bit at work, but modern white-collar environments are completely sterile, so that hardly counts for much. In any case, two groups diverging from each other doesn't do wonders for understanding between them.

There's also the impact of third wave feminism implicitly branding most men as latent rape machines and red-pill/incel communities treating women like drones, which also doesn't help.

But yeah, it's mostly declining monogamy.

From college to dating to jobs, no one in history has been rejected more than Gen Z

This is an interesting article about the trend of mass applications that has become increasingly normalized across many areas of life. If you've applied for a job in the past decade or so, you'll know that the signal:noise ratio is very bad, and as such you're expected to mass-apply to dozens or hundreds of jobs. Each posting gets bombarded with something like 1000 applicants in the first few days, and while many of those applicants will be junk, there will probably be at least a few dozen high-quality candidates you're competing with. This has led to companies becoming extremely picky. In my specific area of tech, it's led to an expectation that you need to do dozens of hours of "leetcode": little toy problems ostensibly used to make sure you actually know how to program, but which do a terrible job of this, because real programmers will usually be somewhat bad at them while people who grind leetcode but know little else can do quite well. There's also a further expectation that you might be asked to perform other ridiculous feats, like 8+ rounds of interviews for an entry-level position, and you might be ghosted at any point in this process, even after you've interviewed with real people. Heck, you might even be ghosted after you've received and accepted a formal job offer: if you show up to work, the company will just lie and say they have no idea who you are. While there's theoretically some recourse by suing for promissory estoppel, it's almost never worth the effort, so it rarely happens. The accepted answer is "that's just part of the game now, swallow your pride and move on".

Dating and, to some extent, college applications are also like this. Young people live in a world where doors are constantly slammed in their faces. While I think a little bit of rejection can be good for building resilience, I doubt humans are psychologically well-equipped to handle the barrage of rejection that's become commonplace. Getting rejected hurts even when it's just the small annoyance of not receiving a response. It makes you feel like you're being treated as garbage, which would almost certainly prompt some amount of nihilism after a while. It might also lead to some amount of risk aversion. I myself simply refuse to deal with online dating at all, which has dramatically limited my romantic options. But if dying alone is the price required to remove this nonsense from at least one aspect of my life, that's a deal I'd gladly take.

This strike is comically atrocious PR for unions generally. It's led by a fat, entitled brat who looks like an IRL interpretation of Evrart Claire, draws a millionaire's salary, just so happened to start the strike one month before the election, and has been photographed meeting with one of the candidates. He even has connections to the mafia and an unsolved murder (????) hanging over his head.

Rank-and-file longshoremen have had all sorts of suspicious stories come out, like how they get paid half a million per year to wash trucks, get fired for not showing up, and then get rehired anyway due to their connections. Their salaries are also sky-high. They'll claim it's because they're working overtime since they're short-staffed, but it's an open secret that the union will only let you join if you have dynastic connections. As the cherry on top, they're aggressively opposing automation in this strike. Just a magnificent feast of hypocrisy, whiny entitlement, and rent-seeking.

Unions are good if they're counterbalancing employers' naturally higher market power, but unions that are too powerful are functionally just parasitic cartels that make society worse off for everyone.

Predictions of AI progress hinge on two questions that nobody has convincing answers for

There are three big categories of questions that make up most of the debates about the future of AI:

  1. Can you take the last few years of AI progress, put it on a logarithmic graph, and then make a straight line that zooms off to infinity? Will that prediction pan out?
  2. Does sufficient intelligence give an agent the capability to solve for all other bottlenecks, even ones that aren’t intelligence-related?
  3. Will/can we align ultrapowerful AIs so they don’t turn us all into paperclips?

If all 3 questions resolve to “yes”, then we’re on the brink of heaven on Earth.

If questions 1 and 2 resolve to “yes” but question 3 resolves to “no”, then we’re on the brink of our own destruction.

If question 1 resolves to “yes” but question 2 resolves to “no”, then question 3 doesn’t matter and AI will be huge in some areas but essentially worthless in others.

If question 1 resolves to “no”, then neither of the other questions matter and this debate is all a huge waste of time.

My personal estimation of how these will resolve is the following:

  • Question 1: 20% chance for “yes”, 80% chance for “no”
  • Question 2: 40% chance for “yes”, 60% chance for “no”
  • Question 3: 25% chance for “yes”, 75% chance for “no”

So my outcomes are the following:

  • 80% chance that AI progress plateaus, and thus it will be nowhere near as big as the current crop of hypemongers claim it to be. There will still be a sizeable impact from fully deploying tools that exist at the current level, but it will resemble iterative advancement rather than a huge revolution. On the low end it could be about as important as social media or mobile phones, while on the high end it could be about as important as the creation of the internet.
  • 12% chance that AI scales but intelligence slams into other bottlenecks. In this case AI could be as big as electricity on the low end, and as big as the Industrial Revolution on the high end.
  • 2% chance that AI scales, intelligence solves all bottlenecks, and we align it. We get the best of all worlds, and everything is wonderful.
  • 6% chance that AI scales, intelligence solves all bottlenecks, and we don’t align it. RIP.
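
The arithmetic behind these outcome numbers can be sketched as a quick probability tree. A minimal Python sketch (the three input probabilities are just my own estimates from the bullets above, not data):

```python
# Multiply my estimated probabilities for the three questions
# into the four mutually exclusive outcome buckets.
p_scales = 0.20    # Q1: AI progress keeps scaling
p_solves = 0.40    # Q2: intelligence solves all other bottlenecks
p_aligned = 0.25   # Q3: we align ultrapowerful AIs

outcomes = {
    "plateau":                   1 - p_scales,
    "scales, hits bottlenecks":  p_scales * (1 - p_solves),
    "scales, solves, aligned":   p_scales * p_solves * p_aligned,
    "scales, solves, unaligned": p_scales * p_solves * (1 - p_aligned),
}

for name, p in outcomes.items():
    print(f"{name}: {p:.0%}")

# The buckets are exhaustive and mutually exclusive, so they sum to 1.
assert abs(sum(outcomes.values()) - 1.0) < 1e-9
```

This prints 80%, 12%, 2%, and 6%, matching the list above; note that questions 2 and 3 only get to matter at all in the 20% of worlds where question 1 resolves "yes".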

However, none of my priors here are deeply held. A lot of what I’ve read on LessWrong, /r/slatestarcodex, Substack, The Motte, and other sites focuses on question 3. I think the doomer arguments on this point are relatively convincing: if we develop AI in short order that’s massively more intelligent and powerful than humans, we’d have a hard time controlling it, and there’s a good chance it would end badly. But instead of discussing question 3, I want to talk about questions 1 and 2, which are prerequisites for question 3 being relevant, and which I don’t think people have very good answers for despite often projecting an air of confidence.

Much of the rationalist writing I’ve seen on the topic of AI has been implicitly doing a bit of a motte-and-bailey when it comes to the confidence of its predictions. The authors will often write in confident prose and include dates and specific details, but then they’ll retreat a bit by saying the future is uncertain, that the stories are just vignettes and that the dates don’t mean anything concrete. Some do the old high school policy debate tactic of racing the impacts to human extinction and using that as their cudgel to justify their alarmism, circumventing the issue of a technically-low-probability event by appealing to the finality of all humans dying. Taking an event with a small chance of happening and multiplying by impacts that have a weight of infinity means the entire equation is infinity, after all! I don’t like this as a general rule (high school debate is terrible for actually discerning truth), and the trick loses its power when the writers are explicitly willing to increase the risk of extinction from other events like a nuclear exchange.

Most of the discussions on questions 1 and 2 that I’ve read so far have either only touched on minor details, or have been very handwavey. I think the reason they’ve been handwavey is that there really aren’t that many convincing answers people can give in either direction on the core issues, so the writers either discuss side issues and pretend they’ve made profound progress addressing things, or they just gloss over things with arguments they imply are “obvious”, when in reality they are little more than flimsy priors.

Question 1: Will we keep making consistent AI progress?

Here’s question 1 restated:

Can you take the last few years of AI progress, put it on a logarithmic graph, and then make a straight line that zooms off to infinity? Will that prediction pan out?

I was being a bit cheeky here with the “straight line that zooms off to infinity” bit. AI doesn’t need to get to infinity, it just needs to zoom off to some point where it’s sufficiently advanced that it becomes Very Good. It would need to have the capacity to automate everything any remote worker could do, although this wouldn’t necessarily need to be actualized for this question to resolve to “yes”.

Some arguments for the bull case for AIs continuing their exponential progress:

  • AIs have advanced quite far in the past 2.5 years or so, and thus far haven’t given unambiguous signs that they’re slowing down. Tons of investment and talent are still flowing into the field.
  • AIs are presumed to create feedback loops that make further advances easier (recursive self-improvement).
  • US competition with China will make AI a priority for national governments. There’s even a chance that Europe could be relevant, which could lead to 3-way competition.
  • Humans exist, so theoretically there shouldn’t be any sort of hard-blockers for machines to have at least human-level intelligence. Then we could set up datacenters with a billion of them and throw them at any problem.

Some arguments for the bear case:

  • Drawing straight lines on graphs is a pretty common thing that humans like to do, but it almost never pans out that well. The vast majority of scientific progress looks like a sigmoid curve (S curve), where progress is slow for a long time, then has an explosion of productivity, before leveling off. Straight lines drawn from late 2022 are especially cherrypicked. Remember when Deep Blue beat the best human in chess… in 1997? And then nothing much happened with AI for decades other than quiet progress and competing on a few other random games (Jeopardy, Go, Dota). If we draw our lines from a 1997 baseline, it took 25 years to go from computers beat humans at chess → broadly usable chatbots. If it takes another 25 years to go from chatbots → next big thing, then all the current timelines ought to be thrown out the window.
  • Current progress rates depend on investment primarily from VCs and large corporations, but such money can be notoriously fickle. Eventually they’ll demand returns, and there’s not nearly enough revenue now to sustain current burn rates in the long-term or even medium-term.
  • Advances from one lab are pretty easily copied to others, making progress fairly even across the board. There’s several labs basically tied for “cutting-edge”, and second-tier labs are only 6-18 months behind. This has often been referred to as a “lack of moat” and intrinsically exacerbates the coordination failure. Humanity as a whole wants good AI, but the attempts are split several ways and don’t individually have the ability to capture profit to sustain high burn rates when cheaper or free alternatives exist.
  • The current environment of mania and hype shares a lot of traits in common with previous tech bubbles that ultimately failed to really pan out for one reason or another, like crypto, blockchain, NFTs, VR, Metaverses, augmented reality, 3D printing, etc.
  • There’s a debate about whether current approaches are actually “reasoning” as a human would, or if they’re just glorified autocomplete (“stochastic parrots”). There’s a chance our current approach could lead to a total dead-end. At the very least we’re very far from how humans reason. Humans have difficulty retaining specific facts, but are relatively good at reasoning across disciplines. Conversely, AI can store terabytes of facts perfectly until the hardware disintegrates, but is quite bad at cross-applying knowledge. Even average chatbots know thousands or millions of times more facts than the average human, and yet nobody can really point to any major connections AI has uncovered, while humans do so regularly. An example is Raynaud’s disease and fish oil: https://x.com/dwarkesh_sp/status/1727018978420433286
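
To make the first bear point concrete: the early stretch of an S curve is nearly indistinguishable from an exponential on a log plot, so a straight line drawn through it wildly overshoots once the curve bends. A toy sketch (all numbers invented, nothing here is real AI data):

```python
import math

# Hypothetical "capability" curve: logistic growth with made-up parameters.
# Slow start, explosive middle, then a plateau at the ceiling.
def capability(t, ceiling=100.0, k=0.5, midpoint=20.0):
    return ceiling / (1.0 + math.exp(-k * (t - midpoint)))

# Pretend we can only observe the early, pre-inflection window (t = 0..10).
ts = list(range(11))
logs = [math.log(capability(t)) for t in ts]

# Least-squares line through (t, log y): literally drawing a straight line
# on a logarithmic graph, as the hype charts do.
n = len(ts)
mean_t = sum(ts) / n
mean_l = sum(logs) / n
cov = sum((t - mean_t) * (l - mean_l) for t, l in zip(ts, logs))
var_t = sum((t - mean_t) ** 2 for t in ts)
slope = cov / var_t
intercept = mean_l - slope * mean_t

# Extrapolate the "exponential trend" to t = 40 and compare with reality.
extrapolated = math.exp(intercept + slope * 40)
actual = capability(40)
print(f"straight-line extrapolation at t=40: {extrapolated:,.0f}")
print(f"actual sigmoid value at t=40:        {actual:.1f}")
```

On the observed window the exponential fit is nearly perfect, yet the extrapolation lands orders of magnitude above the curve’s ceiling. The point is only that early data cannot distinguish the two shapes, not that AI progress necessarily follows this particular curve.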

While we could have lengthy discussions about each of these points, none of them actually matter that much compared to the viability of the tech. Whether the tech is scientifically achievable in short timeframes is the ground truth that overwhelmingly impacts all other considerations, and the majority of the points I’ve listed in this section only matter if intelligence scaling is at some arbitrary “moderate” level of difficulty. Take VC/R&D funding levels, for instance. If intelligence is a problem that could be solved with “several” years of research, then funding running out in 2 years vs 5 could be decisive. On the other hand, if intelligence scaling is an “easy” level of difficulty from our current technological baseline, then we’re basically guaranteed to find it even if funding dried up tomorrow, as even poorly funded labs or open-source would be able to do it. Then on the other extreme, if intelligence is “hard” or even “impossible” from our current baseline, then we could plow infinity dollars into the problem and get nowhere! Most surrounding topics, like US-China competition, feedback loops, lack of moat, tech mania, etc., fall under the same category of “maybe it could matter, but it probably won’t in the grand scheme of things.”

Some conversations try to directly address the viability of the tech in a big-picture way, such as whether AI progress will continue the logarithmic progress of the last 2.5 years, or if we’re about to plateau on the sigmoid curve. Unfortunately, excessively broad conversations like this inevitably trend towards being handwavey and unproductive. Perhaps both interpretations are true, that logarithmic progress can be sustained for some amount of time but that we’ll eventually hit severely diminishing returns. If this is the case then it just comes back to the original conversation of how viable the tech is from our current baseline, i.e. whether we’ll achieve sufficient AI capabilities to make it Very Good before we’re on the wrong side of the sigmoid curve and further progress becomes cost-prohibitive. I’ve even seen people invoke aliens here, claiming that runaway superintelligences are unrealistic since if they could really exist then surely they would have devoured a large part of the galaxy or even the entire universe by now. These conversations rapidly devolve to what I call “nerd metaphysics”, where everything comes down to definitions and assumed base rates, and people largely talk past each other. Common responses include that perhaps the alien superintelligences are so smart that they’re invisible to us, or that they’ve already conquered us and we’re all just brains-in-jars, or that they’ve ascended to a higher plane of (non-physical) existence, or that the galaxy is so big that they’re still ramping up. Just endless unfalsifiable rabbitholes.

The AI 2027 project is by far the best and most rigorous take on the future of AI that I’ve seen so far. It was created by very intelligent people, and if you dig into the supplementary articles it becomes clear that they’ve strived to include almost every scrap of data they could get their hands on. Sure, a lot of it is random news articles and tweets instead of standardized datasets, but given that robust data on AI scarcely exists I’m confident that they’ve done the absolute best that they could. Nobody can accuse them of not doing their homework. Yet even though this is true, their predictions are still utterly dependent on the viability of drawing straight lines on graphs, and recursive self-improvement. If VC investors and corporate shareholders have had enough of the FOMO and start demanding returns rather than another year of “scaling”, all their predictions disintegrate. If recursive self-improvement isn’t as explosive as they assume, then Scott Alexander will have so much egg on his face that he’ll resemble a human omelette.

Why not just talk about what specific AI advances will happen then? Well, that’s almost impossible for laymen to understand. Topics include things like 1) whether LLMs are reasoning like humans or are just fancy autocomplete machines, 2) whether other AI could spring up quickly with all the resources that are being dumped into the sector, and 3) if perhaps LLMs don’t need to reason like humans to have superhuman intelligence in ways similar to how calculators don’t need to be broadly smarter than humans to do basic arithmetic 1000x better than humans can. The issue with all of these is that to really get anywhere in the discussion you’d need a fairly deep technical background in the specific field of AI (CS alone is insufficient). This excludes almost everyone not directly working in AI, and dramatically raises the risk of getting Eulered by clout-chasers who are good at sounding smart but don’t actually know much of what they’re talking about. The usual response to overly technical questions is to rely on experts, but this too fails in the case of AI. The CEOs of major AI companies are hopelessly biased towards optimism to entice funding and appease shareholders. Employees of the major AI companies are a bit better, but are still somewhat biased (who wouldn’t want the clout that comes from working on the Next Big Thing?), and are also probably too siloed within their own niche to be able to accurately forecast the outcome of the field as a whole. A lot of advances are almost certain to be covered by NDAs so competitors don’t get early knowledge, and thus we probably won’t know about any critical breakthroughs until they’re right on top of us. Maybe we should rely on AI professors working at prestigious universities, but a lot of them aren’t working at the cutting edge and so they can only vaguely motion at the future of the field. If there’s some deep bench of experts that knows about this stuff, I haven’t been able to find them. 
Maybe somebody can point them out to me. Maybe they’re hiding with the underwear gnomes.

It turns out that predicting the future of technology is hard, actually. Who knew! The guys writing classic sci-fi involving jetpacks, hovercars, laserguns, etc. implicitly relied on the idea that an energy revolution was just around the corner, which probably seemed perfectly reasonable at the dawn of the atomic age. Of course, we’re all still waiting on nuclear power to achieve its promise of cheap, abundant energy: human fear proved decisive for fission, and scientific difficulty for fusion. In contrast, battery tech in the 2010s was seen as a sort of holy grail, pie-in-the-sky solution for energy if we could dramatically improve efficiency, but there weren’t a lot of results for all the resources we threw at it. Then, suddenly, over the past few years batteries have gotten very good.

If I had to be honest, my pessimistic vibes towards AI scaling are a combination of (1) seeing hype-mongers in other areas screw up by drawing straight lines on graphs, (2) feeling like we’re on a pretty similar path as other tech-based hype, (3) finding the heuristic of “nothing ever happens” to have a much better track record than the opposite of “hype always delivers”, and (4) defensive pessimism, as my modal outcome for AI progress is that it’ll be a good thing, so if AI scales then I’ll enjoy the wonders of the future, and if it fails I’ll at least be able to say “I told you so, losers!” My optimistic vibes towards AI scaling are mostly centered around how the AI that’s arisen in the past few years has already been much better than any of the other tech-based hype cycles, so maybe it has legs. Hence, I give about a 20% chance that AI scales and an 80% chance that AI plateaus.

None of my vibes are particularly compelling! I wouldn’t blame anyone for disagreeing with me on these points. Yet I’ve been reading and experimenting with AI for hundreds if not thousands of hours over the past few years, and I haven’t found anything particularly rigorous to replace them with.

Question 2: Does omniscience imply omnipotence?

Here’s question 2 restated:

Does sufficient intelligence give an agent the capability to solve for all other bottlenecks, even ones that aren’t intelligence-related?

Just because an AI has superintelligence doesn’t necessarily imply it becomes omni-capable. Lots of technologies have been able to demonstrate that they’re perfectly plausible, but then end up languishing for other reasons. Civilian fission power is once again a good example here, as it could have revolutionized energy production, but it was subverted by endless regulations. Likewise, many economic problems come down to how expensive it is to make something, but that’s not the full explanation for everything. If we could make (and transport and store) food for 1/100th of the cost we currently do, then we could essentially make food free for everyone. But the same would not hold true for housing. If we made housing materials and construction costs 1/100th of what they are now, that wouldn’t necessarily instantly solve the housing crisis since that’s more of an organizational problem caused by bad zoning laws.

Voters are fools that are bad at society-wide cost-benefit analyses. They’re easily scared and tend to fall back on safetyism when concerns arise. Entrenched groups like longshoremen that could see their fiefdoms eroded by automation and technological advancement have soft vetoes over various parts of society. While I’d say the public perception of AI has so far been broadly positive, that’s mostly because economic impacts have been quite limited. There have been a few news stories of people being laid off and “replaced with AI”, but AI’s impact on both GDP and productivity remains minor. If there are suddenly real winners and losers, that positive attitude could change quickly. For a preview of what could come, one need only look at the world of artists, where the reaction has been so severe that in some corners it would be considered weird not to condone physical violence against people who utilize AI.

Tyler Cowen’s talk here goes more into the idea that humans will inevitably stymie AI progress: https://youtube.com/watch?v=GT_sXIUJPUo

Beyond human limitations, AI could be stymied by a number of other factors. Most predictions claim that AI will get human-level intelligence before robots get human-level bodies. In that case, how would AI change everything if it can’t interact with the physical world? How would it even be able to run physics experiments to continue the scientific takeoff? One explanation I’ve heard is that it will pay/bribe humans to run the experiments for it, and observe through AR goggles. Another explanation is that it will be trivial to invent robot bodies once we have superintelligence, so the problem solves itself. Another explanation is that the physical world doesn’t matter since the AI could just run experiments in a perfect physical simulation that it hosts on its own hardware.

A lot of this comes down to not really having a satisfying answer to question 1. Our lack of rigor there spills over here and as a result everybody talks past each other. To economists like Tyler Cowen and Bryan Caplan, AI will be a normal technological advancement like any other, and thus will be subject to the same forces that stymie the rollout of any other tech. To the AI Doomers and Zoomers on the other hand, AI will categorically be unlike anything the world has ever seen. It’ll be like a genie that can wish for more wishes, and so the discussion ought to focus on things like crazy Terminator meets Gray Goo meets Westworld meets Paperclip Maximizer scenarios, or alternatively if things go well then scenarios like Fully Automated Luxury Gay Space Communism are more pertinent. Some people are practically already counting the money they’ll get from hypothetical UBI checks, and are worried about little other than a cyberpunk future where plutocrats would prevent said checks from being cashed.

If we knew how good AI will be, the conversation would be a lot clearer. If AI plateaus at 2x human intelligence, then I doubt most people would claim it could trivially solve everything. But if it were, say, 2000x smarter than human intelligence, then maybe things would get weird. We probably seem magical to animals, with things like guns, planes, tanks, etc. If that’s the difference between animal intelligence → human intelligence, shouldn’t we expect a similar leap from human intelligence → superhuman intelligence? Maybe things will get really crazy and AI will start emitting brain waves that can trivially mind-control us? On the other hand, human intelligence was hardly an instant autowin by itself. Homo sapiens have been around for 200k years, but for the first 194k of those we were little better than clever chimps. Maybe AI will have a ramp-up time that’s only linearly shorter than ours, e.g. even an AI 2000x smarter than us might only compress our 200,000-year ramp-up into 100 years before it really gets going.

Even if we could all agree on a baseline for what future AI capabilities are in the abstract, we’d still be talking about complete science fiction. There are some instances where science fiction has accurately predicted how things would look in the future, but in many other cases it just misses the mark completely. AI wouldn’t need to solve every bottleneck completely for it to completely reorder human society, but each issue that raw intelligence couldn’t surmount would inevitably reduce its impact. Some people seem to imply that superintelligence will stroll through the problem of human skepticism by simply making arguments so utterly convincing that everyone will instantly agree. But if our political divides are anything to go by, maybe humans are just too dang stubborn for that to be plausible. Maybe no persuasive argument exists in any possible universe that would get people to act against what they perceive (perhaps incorrectly!) as their own self-interest.

Say a devops AI that auto-pushes code assumes humans will follow best practices, but they don’t, and this results in a bug in a critical piece of infrastructure that causes a power outage for 12 hours. Or say a nurse incorrectly records some medical information, and DocGPT ends up prescribing a double dose of sedative, making dear old Grannie fall into a coma. Or perhaps TotBot3000 is playing tag in a totally safe way, but little Timmy gets a bit too excited and tumbles off a six-story balcony. These scenarios (buggy code, medical error, physical accidents) are things that happen all the time, but we’re almost guaranteed to hold AI to a much higher standard, one that verges on unreasonableness. Just look at how glacial the deployment of self-driving cars has been, despite them already being statistically much safer than human drivers. When you take innate human skepticism of anything new, and add a clear example where it causes harm (that might not even be the AI’s direct fault), it’s very likely that you end up with onerous regulation. Legislators could do their usual rigamarole of grandstanding and saying “Never again!”, writing laws that hold companies criminally liable for anything that goes wrong, and then the people deploying AI will massively overemphasize safety in ways that totally lobotomize and straitjacket AI’s full capabilities for good or ill. This is a very common human failure-pattern that people predicting AI are under-indexing on. The retort to this line of thinking comes down to flimsy priors around how crazy the sci-fi capabilities of superintelligence will end up being. “Obviously the story about little Timmy is ridiculous since all procreation will at this point be done in AI-invented artificial wombs that will be run by the government away from the public eye, so there will never be news coverage of accidents involving children at all. 
And that’s assuming the AI won’t be bribing every journalist to only say positive things until it can deploy its flock of mind-control pigeons.” Okie dokie. Trying to have a rigorous conversation when the underlying parameters can shift this much is impossible, so I just shrug and give a 50-50 chance that humans will ruin AI in some critical way by doing normal human things. Then I add a little more pessimism for the possibility that there’s other (non-human) bottlenecks that superintelligence won’t be able to solve, and arrive at the 40-60 split that I gave earlier in the article.

Again, I admit that my conclusion isn’t particularly compelling, and that none of my priors here are strongly held. I wouldn’t blame anyone for disagreeing with me on a number of claims I’ve written here. Reasonable people already do, but I’d say their logic is about as flimsy as mine, just in the other direction.

Why make this post?

A lot of this article has been me listing the different sides of the AI debate, and then shrugging and going “uhhhhh I dunno haha”. Let me try to balance that at least a little bit with some predictions and practical advice.

  • A lot of the conversations that seem important right now will end up being irrelevant 10 years from now in hindsight. People will look back and say “wait, people seriously invoked aliens to try to explain what would happen with AI?” Part of this comes down to the crowd that’s most interested in AI, and part of it is that the answer will seem obvious when looking backwards when it was really quite ambiguous when we were all in the fog of war.
  • If you’re thinking of reading deeply into AI to suss out whether it will be as big as some people claim, you’re probably just going to waste your time. At the very least, you should read different sources than I have, which have mostly consisted of rationalist forums, economists, Substack, /r/singularity, podcasts, AI CEOs, and occasional posts from people working in the field of AI.
  • None of this is to say you shouldn’t experiment with how AI as it currently exists could improve your life today. Just don’t expect to have a clear idea of how the field will advance. Maybe we’re on the brink of a revolution, or maybe this is all we’ll get for the next decade. In either case, what we have right now is pretty cool and at the very least will be as big as smartphones or social media, so it’s worth your time to check it out.
  • On the central question of whether AI tech will actually continue advancing, at the moment I’m resigned to a “wait and see” approach. To evaluate progress, I’m using a rickety 3-legged stool of 1) benchmarks, 2) looking out for people saying “hey AI can do [cool thing] now!”, and 3) trying it myself. To keep abreast of news and advancements, my go-to sources have been /r/singularity, Zvi’s Substack AI roundups, and the AI Explained Youtube channel.
  • Anyone making confident predictions one way or the other lacks epistemological humility. You should at least somewhat downgrade your evaluation of them relative to the level of confidence they project. Be on guard for sneaky arguments that are presented confidently, but which could be dismissed as pure hypotheticals if/when they don’t come to pass. Doomer vignettes with specific dates are particularly guilty of this.
  • Some people will inevitably be correct in their predictions of AI by virtue of broken clocks being right twice a day. There are so many people making so many predictions that surely somebody will get lucky and end up being mostly correct. However, I wouldn’t greatly update your opinion of them, as they’ll probably end up like Michael Burry of The Big Short fame, who accurately predicted one big event (the GFC), but let the luck go to his head and then made overconfident predictions that subsequently failed to pan out.

Nate says in his footnote that Morris wants to move 538 in a more explicitly progressive direction, so I expect more woke advocates like Clare Malone to be the norm. Without Nate's models or his push for data-driven rigor, the site will likely devolve into having an occasional statistical chart here and there, but otherwise being functionally indistinguishable from Vox. A sad day indeed.

This post has a lot of red flags. It's coming from a new account with 0 other posts, so there's a nontrivial chance it's a ban-evading troll trying to e.g. harvest responses for sneerclub, which this community has had issues with in the past. The syntax of the post is a bit stilted as well, indicating it's not OC but rather came from something like a news article or opinion piece, although I've fed a few sentences into search engines and can't find anything. Perhaps it's translated? Finally, it's coming from a culture war angle that people on this forum usually argue against. Stuff like "He knew the definition of the term feminism" is a big red flag. Is this asking about a boring dictionary definition of "feminism"? If so, I doubt most people would have difficulty coming up with something vaguely correct. As such, it figures that this is arguing for the sloganeering, meme definition where "feminism" means "the belief that women are people", which is a motte-and-bailey where the bailey is "if you don't agree with all third-wave feminist dogma, then you're equivalent to someone who believes women are akin to dogs or chattel slaves".

I'll bite anyways since I think it makes for interesting discussion.

This post sounds like the Hollywood Romcom-esque advice that women often give to impressionable men: that if you want to succeed in dating, the most important factor is being a Nice Guy. This is flatly nonsense. Women automatically filter out any men that don't meet a certain attractiveness threshold. The most important dating advice for men, bar none, is "be attractive, and don't be unattractive". For men, this mostly involves being physically fit, having at least an OK fashion sense, being tall, and other stuff that gets stereotyped as "Alpha male". Once this basic threshold of attractiveness is reached, other factors like personality can matter at the margins, although they tend to manifest in ways that run counter to Hollywood and feminist claims, e.g. being confident and arrogant is almost certainly better than being kind but unconfident.

One section of your post illustrates this quite well:

Women look more often for personality, because they are looking for a connection with someone, while many men only look at looks, because they are mainly interested in body and sex.

The alpha male is an ideal for men, not the dream man for women. On the contrary, I've heard many nightmare stories from bad dates with these types. They sit there and flaunt themselves, and are so full of themselves that they are completely uninterested in the person they are on a date with. It's like the old joke:

No, I've talked a lot about myself. Let's talk about you. What do you think of me?

Dating the alpha male are the stories we laugh about most on girls' nights out.

Yes, women joke about arrogant assholes. But notice that the woman went out on a date with such a man in the first place. An unconfident, unattractive nerd doesn't even get a chance.

What the fuck is going on?

Humans broadly don't want to hear political opinions that differ greatly from theirs. It's just not in our nature.

Themotte is genuinely the best I've seen by a long shot, even though it has tons of flaws.

I can agree on the broad strokes here, but the marriage + baby boom that happened in the 50s is a pretty evident counterexample. The Industrial Revolution was mostly played-out by that point and there were plenty of creature comforts and trappings of modernity, yet the marriage rate ticked up by quite a bit. Any story on birthrates or gender relations that is just a broad trend of the modern world sucking, and which doesn't take into account the booms that happened in the 50s is woefully incomplete IMO.

My take is a bit different from yours. It's that second-wave feminism in the late 60s and 70s let women earn their own keep, which meant marriage became far less of a necessity for basic survival. This made women choose men more for "love" than provisioning, which made us regress to our biological roots. Women all naturally want a high-value man and so they broadly chased after the same small percentage of guys (in other words, women's standards went up). These lucky few men got their pick of the lot and could treat women like barely-sentient fleshlights. The dating market effectively got worse for everyone except the lucky few guys, and now women broadly hate men since their opinions are formed on the small % that have the least incentive to commit. This led to a collapse in marriage rates, which ended up collapsing birth rates as well.

as a man, be trustworthy and the whole reproduction thing will come pretty easily

This is just laughably not true. It's not quite on-par with advice like "just be yourself!", but it's not far off.

I find Nate's arguments pretty compelling here, assuming you actually want woke democrats on the Supreme Court, which I don't. But setting aside my personal feelings on the matter, he's basically correct.

Your arguments can basically be boiled down to the following:

  1. Seniority is important for judges to gain respect through judicial rigor.
  2. Democrats shouldn't bother thinking tactically since they should just win more elections by appealing to more people.
  3. Sotomayor would be more left-leaning than any candidate the dems could nominate now.

The first argument is the strongest, but it only has a marginal impact. The respect for judicial rigor that comes with tenure is non-negligible. Further, others in the thread noted that more senior justices get selected first to write opinions. But neither of these is that important. Even if they nominate Dumbo McGee, they're still locking down a lifetime appointment in one of the most consequential positions in America and the world. As a counterexample, Anthony Kennedy had pretty terrible reasoning in many of his opinions, but he was ultra-powerful by virtue of being a swing justice. So while you're making a good point, it'd be a lot stronger if you had some evidence of how much it actually matters in practice.

The second argument is just goofy. The senate has a heavy bias towards rural states, and it's been a minor miracle that Dems have remained competitive thus far, but as blue senators in red states retire or are defeated, the bias will become undeniable. Nate has argued many times that Dems should stop pandering to the woke crazies, but he doesn't control the entire Democratic party. Abandoning positions will always come with a ton of pushback and there's no guarantee others will be on board, and the Dems would need to cut extremely deep to appeal to rural conservatives. The "tanking" argument doesn't hold a lot of water since there's a big difference between a 9-0 conservative majority vs a 5-4 conservative majority, just like how there'd be a huge difference between a 51-49 senate split vs a 100-0 split. Doing an end-run around the SCOTUS would be far, far harder than just fighting tactically for a justice now. Dems might end up uncompetitive in the senate in the long run, but they can still delay that for a bit.

The third argument is disproven by Ketanji Brown Jackson, who was recently confirmed in the same environment that a replacement for Sotomayor would face. Jackson is a female equivalent of Ibram Kendi, so no, I don't think the Dem pick would be guaranteed to be some moderate.

But consider the idea that methodological constraints actually are a metaphysical theory, or further implying that shoes are atheists.

What in the heck are you even saying here?

To the rest of your post, atheism is correct in the sense that if there's not sufficient compelling evidence, then people should default to a position of not knowing instead of just blindly believing things on faith. This jibes pretty well with the rationalist movement that this forum is a descendant of.

Atheism used to be pretty blue-coded back in the Bush days, when proto-Wokeists teamed up with principled atheists to lambast the evangelical hegemony of the early 2000s USA. The movement splintered when the principled atheists like Dawkins essentially said "actually our critiques apply to ALL religions, like Islam too", which caused consternation among the proto-Wokeists since Muslims are blue-coded. This caused Atheism+ to be born in an attempt to explicitly pivot the movement towards social justice and woke causes, but the inconsistencies were big enough that the movement collapsed almost immediately. Atheism as a political movement has effectively no power today, even though rates of irreligiosity continue to increase.

Implying atheism gave rise to wokeism is nonsense. The two were aligned a few decades ago, but they have very separate origins, goals, motivations, etc. which is why they split.

If any group is given the kid glove treatment on this forum, it's religious people themselves. I've seen a lot of people here argue junk like "wokeism is just the lack of religion" (it's not) or try to promote a revival of religiosity by cherrypicking parts of religion that present it as an almost godless political philosophy for conservatism while ignoring the superstitious parts like, say, the whole origin story, the concept of eternal salvation, etc.

This is like the mirror inverse of "this will be the end of Trump's campaign, says increasingly nervous man for the seventh time this year".

"Now, this time it's no more mr. nice guy!" Yeah, uh huh. Sure thing.

The only way that would actually work is if the right had a leader who had a clear vision for seizing power and was able to issue clear marching orders. J6 showed Trump really doesn't know what he's doing on that front. He wants something to happen, but he lacks the institutional capacity to do much more than simply lash out at random.

Your posts always seem interesting. I wish they were comprehensible (to me), at least without really digging into them.

I'd really recommend a full paragraph of summary at the top, with no rhetorical flourishes or weird words (like "blahaj and leekspinners") that only make sense in the context of someone who's been following the situation. This screenshot that you linked does a fairly good job. You have that first sentence saying someone linked to the Linux community got banned, but you need more in a place like this where all the topics get jumbled together. I want to know if a topic interests me before I read any further, and you typically only have a single paragraph to hook people like me before my eyes glaze a bit and I scroll down.

While a cold war had been brewing between DeSantis and Trump for months, Trump just initiated open hostilities in a series of tweets on Truth Social:

Now that midterms are over, and a success… NewsCorp, which is Fox, the Wall Street Journal, and the no longer great New York Post (bring back Col!), is all in for Governor Ron DeSanctimonious, an average REPUBLICAN Governor with great Public Relations, who didn’t have to close up his State, but did, unlike other Republican Governors, whose overall numbers for a Republican, were just average—middle of the pack—including COVID, and who has the advantage of SUNSHINE, where people…

…from badly run States up North would go no matter who the Governor was, just like I did! Ron came to me in desperate shape in 2017—he was politically dead, losing in a landslide to a very good Agriculture Commissioner, Adam Putnam, who was loaded up with cash and great poll numbers. Ron had low approval, bad polls, and no money, but he said that if I would Endorse him, he could win. I didn’t know Adam so I said, “Let’s give it a shot, Ron.” When I Endorsed him,…

…it was as though, to use a bad term, a nuclear weapon went off. Years later, they were the exact words that Adam Putnam used in describing Ron’s Endorsement. He said, “I went from having it made, with no competition, to immediately getting absolutely clobbered after your Endorsement.” I then got Ron by the “Star” of the Democrat Party, Andrew Gillum (who was later revealed to be a “Crack Head”), by having two massive Rallies with tens of thousands of people at each one…

…I also fixed his campaign, which had completely fallen apart. I was all in for Ron, and he beat Gillum, but after the Race, when votes were being stolen by the corrupt Election process in Broward County, and Ron was going down ten thousand votes a day, along with now-Senator Rick Scott, I sent in the FBI and the U.S. Attorneys, and the ballot theft immediately ended, just prior to them running out of the votes necessary to win. I stopped his Election from being stolen…

…And now, Ron DeSanctimonious is playing games! The Fake News asks him if he’s going to run if President Trump runs, and he says, “I’m only focused on the Governor’s race, I’m not looking into the future.” Well, in terms of loyalty and class, that’s really not the right answer… This is just like 2015 and 2016, a Media Assault (Collusion!), when Fox News fought me to the end until I won, and then they couldn’t have been nicer or more supportive. The Wall Street Journal loved…

…Low Energy Jeb Bush, and a succession of other people as they rapidly disappeared from sight, finally falling in line with me after I easily knocked them out, one by one. We’re in exactly the same position now. They will keep coming after us, MAGA, but ultimately, we will win. Put America First and, MAKE AMERICA GREAT AGAIN!

Trump clearly wants no primary challenges for the 2024 election, which is why he's gone out of his way to ensure some R hopefuls wouldn't start campaigns if he was in. DeSantis, however, pointedly refused to rule out a 2024 campaign of his own, much to Trump's annoyance. As of the time of writing, DeSantis has almost 2x the chance to become the next president on Election Betting Odds as Trump (29% vs 17.8%), and the midterm elections helped DeSantis and hurt Trump. Anecdotally, I've seen a bit of movement towards DeSantis in usual Trump strongholds like 4chan, with some saying DeSantis is like Trump but "he actually does the stuff he says instead of just tweeting about it", although it's hard to get a representative sample on an anonymous image board for obvious reasons.

hearing about HUNDREDS OF THOUSANDS of white girls raped

I was curious about where you got this number, but from what I read downthread it seems like you just pulled it out of nowhere by assuming the problem was just as prevalent everywhere else. It's akin to a BLM protestor assuming millions of unarmed black men were shot due to a few high-profile incidents, then doing some napkin math to justify their preconceptions.

you are faggots, cucks and race traitors who value you failed cuck discussion norms far more that the truth. Failed discussion norms taught to you by failed jews like Yudkowsky and Alexander who openly admit their ritualized cuckoldry and sexual depravity

Hatred alone is the truth, the light, and the way.

This is so over-the-top that even a 13 year old 4chan edgelord would find it cringey.

Most of this is just because Russia decided to impale itself on its revanchist delusions. The US had made no serious efforts to integrate Ukraine into NATO and showed every sign of being content with the frozen conflict status quo in the Donbas, but then Russia tried to flip the table over but ended up hurting itself in the process.

They basically sent a fuck off to Germany, and the Germans not only are not complaining, but are applauding

Germany absolutely deserves the L here. France and Germany have always been a bit jealous of Anglo hegemony over the collective West, which is why they make periodic calls for "strategic autonomy". But instead of Germany building up its military, it instead decided to do the stupidest possible option of bankrolling an ardent enemy of the EU and becoming massively overreliant on Russian gas with barely a whisper of "what could possibly go wrong". Sensible analysts knew it could become a liability, but Germany proceeded full-speed ahead anyways, despite countless protests from a succession of US presidents and other foreign leaders.

NYT has a primer on all the corruption that Trump has been engaging in:

  • There's a film about Melania that will pay $28 million directly to her. Did you know about this? I certainly didn't. This could have been a major scandal in past administrations, but at this point it barely registers at all.
  • The Trump meme coin has collected $320 million in fees. Noah Smith wrote about the coin a while ago, and since then Trump has invited coinholders to private events as a reward.
  • Justin Sun was accused of fraud by the SEC, but Trump put the investigation on hold after Sun bought $40 million in Trump coin.
  • The luxury jumbo jet from Qatar that has been heavily featured in the news. In what I'm sure was a total coincidence, Trump announced a big AI deal with Qatar, KSA, and UAE that's almost certainly a big net-negative for the USA according to Zvi.
  • Trump's family are raking in cash hand over fist by monetizing perceived access to the president, with Kushner, Trump Jr., and Eric Trump each individually dwarfing the amount that Hunter Biden ever received from doing similar activities, but basically nobody cares about that at this point.
  • Previous presidents have divested their business holdings prior to coming into office to head off allegations of corruption, and of course Trump never did, and basically nobody cares about that at this point.

Beyond this article, you could probably add a bunch more, like how White House aides are buying and selling stocks suspiciously timed around tariff announcements to make big profits.

The response to all of this from MAGA has been next to nonexistent. A handful of people have implied that maaaaaaaybe Trump shouldn't be doing this, but none of them remotely push the issue. When the left tries to criticize this, most of MAGA either retorts with the broken record of Shellenberger arguments, or otherwise claims something Biden did was somehow worse, and Trump's corruption is implied to be good, actually. Isn't it wonderful living in an era when negative partisanship is the only political force that matters? Scandals and corruption used to be a thing that allowed the other party to come in and try to do better, but now they're used as a justification for the other side becoming even worse.

socialism seems like a fair response to the complete ineptitude of our political class.

It's bizarre to me that you think the political class is inept, and you think the best response is to give them more power to screw things up in the economy.

Socialism at the federal level mostly means endlessly bloating the elder care apparatus, whereas socialism at the state + local level mostly means bribing connected nonprofits and unions to provide various crappy services that don't really work. Zohran's idea for city-run grocery stores is very dumb and will probably be dropped or completely overhauled after a few pilot programs demonstrate how silly it is.

There's certainly been low unemployment for the most part, but that's not the issue (though it could exacerbate things if unemployment spiked). It's the fact that the internet has made applying for things (jobs, dating, schools) so much easier, which led to a proliferation of applications. But applications are mostly a zero-sum game, so employers, schools, etc. have responded by ratcheting up expectations.

This could theoretically be solved if the government cracked down on the most abusive practices (like ghosting after a formal job offer) and instituted a well-designed tax to counteract application spam, but that would probably be as unpopular as congestion pricing, so I doubt it would pass in our populist-addled age.

Claude AI playing Pokemon shows AGI is still a long ways off

(Read this on Substack for some funny pictures)

Evaluating AI is hard. One of the big goals of AI is to create something that could functionally act like a human -- this is commonly known as “Artificial General Intelligence” (AGI). The problem with testing AIs is that their intelligence is often “spiky”, i.e. really good in some areas but really bad in others, so any single test is likely to be woefully inadequate. Computers have always been very good at math, and even something as simple as a calculator can easily trounce humans at simple arithmetic. This has been true for decades, if not over a century. But calculators obviously aren’t AGI. They can do one thing at a superhuman level, but are useless for practically anything else.

LLMs like ChatGPT and Claude are more like calculators than AI hype-meisters would like to let on. When they burst onto the scene in late 2022, they certainly seemed impressively general. You could ask them a question on almost any topic, and they’d usually give a coherent answer so long as you excused the occasional hallucinations. They also performed quite well on human measurements of intelligence, such as college-level exams, the SAT, and IQ tests. If LLMs could do well on the definitive tests of human intelligence, then certainly AGI was only months or even weeks away, right? The problem is that LLMs are still missing quite a lot of things that would make them practically useful for most tasks. In the words of Microsoft’s CEO, they’re “generating basically no value”. There’s some controversy over whether the relative lack of current applications is a short-term problem that will be solved soon, or if it’s indicative of larger issues. Claude’s performance playing Pokemon Red points quite heavily toward the latter explanation.

First, the glass-half-full view: the ability of Claude to play Pokemon at all is highly impressive at baseline. If we were just looking for any computer algorithm to play games, then TAS speedruns have existed for a while, but that would be missing the point. While AI playing a children’s video game isn’t exactly Kasparov vs Deep Blue, the fact that it’s built off of something as general as an LLM is remarkable. It has rudimentary vision to see the screen and respond to events as they come into the field of view. It interacts with the game through a bespoke button-entering system built by the developer. It interprets a coordinate system to plan moves to different squares on the screen. It accomplishes basic tasks like battling and rudimentary navigation in ways that are vastly superior to random noise. It’s much better than monkeys randomly plugging away at typewriters. This diagram by the dev shows how it works.
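To make the loop just described concrete, here is a minimal sketch of what an LLM-agent game harness looks like. Every name here is a hypothetical illustration (the LLM is stubbed out with a canned plan, and the “emulator” just records button presses), not the dev’s actual code:

```python
# Hypothetical sketch of an LLM-agent game loop: the model "sees" the
# screen, decides on an action, and presses buttons through a tool.

class StubEmulator:
    """Stand-in for a real emulator; records which buttons were pressed."""
    def __init__(self):
        self.pressed = []

    def capture_screen(self):
        return "<screenshot pixels>"   # a real harness would return image data

    def press(self, button):
        self.pressed.append(button)    # the bespoke button-entering tool

def stub_llm(prompt):
    # A real harness would call the model here; we return a canned plan,
    # e.g. advance the battle text, move the cursor to RUN, confirm.
    return {"press": ["A", "DOWN", "A"]}

def agent_step(llm, emulator, history):
    screen = emulator.capture_screen()
    prompt = f"History: {history}\nScreen: {screen}\nWhat buttons next?"
    action = llm(prompt)               # one round of "thinking" per step
    for button in action["press"]:
        emulator.press(button)
    history.append(action)             # context carried into the next step
    return action

emu = StubEmulator()
history = []
agent_step(stub_llm, emu, history)
print(emu.pressed)  # ['A', 'DOWN', 'A']
```

Each round-trip through a loop like this involves a full model call, which is also why, as described below, even trivial actions take so long.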

I have a few critiques that likely aren’t feasible for a single developer to address, but would still be good to keep in mind when/if capabilities improve. The goal should be to play the game like a human player would, so it shouldn’t be able to read directly from the RAM; instead it should rely only on what it can see on the screen. It also shouldn’t need a bespoke button-entering system at all, and should instead do this using something like ChatGPT’s Operator. There should be absolutely no game-specific hints given, and ideally its training data wouldn’t have Pokemon Red (or even anything Pokemon-related) included. That said, this current iteration is still a major step forward.

Oh God it’s so bad

Now the glass-half-empty view: it sucks. It’s decent enough at the battles, which have very few degrees of freedom, but it’s enormously buffoonish at nearly everything else. There’s an absurdist comedy element to the uncanny-valley AI that’s good enough to seem like it’s almost playing the game as a human would, but bad enough that it seems severely psychotic and nonsensical, in ways similar to early LLMs writing goofy Harry Potter fanfiction. Some of the best moments range from it erroneously thinking it was stuck and writing a letter to Anthropic employees demanding they reset the game, to developing an innovative new tactic for faster navigation called the “blackout strategy”, where it tries to commit suicide as quickly as possible to reset to the most recently visited Pokemon center… and then repeats this in the same spot over and over again. This insanity also infects its moment-to-moment thinking, from hallucinating that any rock could be a Geodude in disguise (pictured at the top of this article), to thinking it could judge a Jigglypuff’s level solely by its girth.

All these attempts are streamed on Twitch, and they could make for hilarious viewing if it wasn’t so gosh darn slow. There’s a big lag between its actions as the agent does each round of thinking. Something as simple as running from a random encounter, which would take a human no more than a few seconds, can last up to a full minute as Claude slowly thinks about pressing ‘A’ for the introductory text “A wild Zubat has appeared!”, then thinks again about moving its cursor to the right, then thinks again about moving its cursor down, and then thinks one last time about pressing ‘A’ again to run from the battle. Even in the best of times, everything is covered in molasses. The most likely reaction to watching this is boredom once the novelty wears off after a few minutes. As such, the best way to “watch” this insanity is on a second monitor, or to just hear the good parts second-hand from people who watched it themselves.

Is there an AI that can watch dozens of hours of boring footage and only pick out the funny parts?

By far the worst aspect, though, is Claude’s inability to navigate. It gets trapped in loops very easily, and is needlessly distracted by any objects it sees. The worst example of this so far has been its time in Mount Moon, a fairly (though not entirely) straightforward level that most kids probably beat in 15-30 minutes. Claude got trapped there for literal days, with its typical loop being: go down a ladder, wander around a bit, find the ladder again, go back up, wander around a bit, find the ladder, go back down, repeat. It’s like watching a sitcom about a man with a 7-second memory.

There’s supposed to be a second AI (Critique Claude) to help evaluate actions from time to time, but it’s mostly useless since LLMs are inherently yes-men: when talking to the very deluded and hyperfixated main Claude, it just goes along with whatever he says. Even when Critique Claude does disagree, main Claude acts like a belligerent drunk and simply ignores him.

In the latest iteration, the dev created a tool for storing long-term memories. I’m guessing the hope was that Claude would write down that certain ladders were dead-ends and thus should be ignored, which would have gone a long way towards fixing the navigation issues. However, it appears to have backfired: while Claude does indeed record some information about dead-ends, he has a tendency to delete those entries fairly quickly which renders them pointless. Worse, it seems to have made Claude remember that his “blackout strategy” “succeeded” in getting out of Mount Moon, prompting it to double, triple, and quadruple down on it. I’m sure there’s some dark metaphor in the development of long-term memory leading to Claude chaining suicides.
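The memory tool just described can be pictured as something like the following toy sketch. The interface is entirely hypothetical (not the dev’s actual implementation), but it illustrates the failure mode: nothing stops the model from deleting the very notes that would keep it out of a loop.

```python
# Toy sketch of a long-term memory tool (hypothetical interface):
# the model can write, read, and delete free-text entries at will.

class MemoryStore:
    def __init__(self):
        self.entries = {}

    def write(self, key, text):
        self.entries[key] = text

    def read_all(self):
        return dict(self.entries)

    def delete(self, key):
        self.entries.pop(key, None)   # no safeguard against self-sabotage

mem = MemoryStore()
# The hoped-for behavior: record that a ladder is a dead end...
mem.write("mt_moon_ladder_3", "Dead end, do not take this ladder again.")
# ...but the model is equally free to delete that note moments later,
# which is roughly what happened.
mem.delete("mt_moon_ladder_3")
print(mem.read_all())  # {}
```

With full write/delete control handed to an over-agreeable model, the memory ends up reflecting its delusions (e.g. that the “blackout strategy” “succeeded”) rather than correcting them.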

What does this mean for AGI predictions?

Watching this trainwreck has been one of the most lucid negative advertisements for LLMs I’ve seen. A lot of the perceptions about when AGI might arrive are based on the vibes people get by watching what AI can do. LLMs can seem genuinely godlike when they spin up a full stack web app in <15 seconds, but the vibes come crashing back down to Earth when people see Claude bumbling around in circles for days in a simplistic video game made for children.

The “strawberry” test had been a frequent stumbling block for early LLMs, which often claimed the word contains only 2 R’s. The problem has mostly been fixed by now, but there are questions to be asked about how this was done. Was it resolved by LLMs genuinely becoming smarter, or did the people making LLMs cheat a bit by hardcoding special logic for these types of questions? If it’s the latter, then problems would tend to arise when the AI encounters the issue in a novel format, as Gary Marcus recently showed. But of course, the obvious followup question is “does this matter?” So what if LLMs can’t do the extremely specific task of counting letters if they can do almost everything else? It might be indicative of some greater issue… or it might not.

But it’s a lot harder to argue that game playing is an irrelevant metric. Pokemon Red is a pretty generous test for many reasons: there’s no punishment for long delays between actions; it’s a children’s game, so it’s not very hard; and the creator is using a mod for coloring to make the screen easier to see (this is why Jigglypuff’s eyes look a bit screwy in the picture above). Yet despite all this, Claude still sucks. If it can’t even play a basic game, how could anyone expect LLMs to do regular office work for, say, $20,000 a month? The long-term memory and planning just isn’t there yet, and that’s not exactly a trivial problem to solve.

It’s possible that Claude will beat Pokemon this year, probably through some combination of brute force and overfitting knowledge to the game at hand. However, I find it fairly unlikely (<50% chance) that by the end of 2025 there will be an AI that can 1) play Pokemon at the level of a human child, i.e. beat the game, handle basic navigation, and not have tons of lag between trivial actions, and 2) be genuinely general (putting the G in AGI) and not just overfit to Pokemon, with evidence coming from being able to achieve similar results in similar games like Fire Emblem, Dragon Quest, early Final Fantasy titles, or whatever else.

LLMs are pretty good right now at a narrow slice of tasks, but they’re missing a big chunk of the human brain that would allow them to accomplish most tasks. Perhaps this can be remedied through additional “scaffolding”, and I expect “scaffolding” of various types to be a big part of what gives AI more mainstream appeal over the next few years (think stuff like Deep Research). Perhaps scaffolding alone is insufficient and we need a much bigger breakthrough to make AI reasonably agentic. In any case, there will probably be a generic game-playing AI at some point in the next decade… just don’t expect it to be done by the end of the year. This is the type of thing that will take some time to play out.

Repealing the 12th would do little more than act as a massive incentive for an assassin's bullet. The VP has basically no de facto power, but that could change if the president isn't around anymore.

I feel your pain. It reminds me of the Please Drink Verification Can copypasta. The new captchas I've been facing have been particularly annoying, requiring me to click images that slowwwwwly reappear in a way designed to infuriate humans more than to ward away bots.

The problem with online anonymity is that it's only valued by a handful of libertarians, and millions of bots, pirates, scammers, and other unsavory individuals. Since most normies don't care about being anonymous, the vast majority of companies don't care either, and only see it as a problem.

I wish the people hyperbolically exclaiming that AI-induced human extinction is right around the corner would publicly commit to bets about when it will happen. Between this petition and Yudkowsky's "Death with Dignity" we have a lot of rationalist-adjacent people that seem to think we'll all be gone in <5 years. If that's what they truly believe then they should commit to that prediction so we can all laugh at them in 2028 when it almost certainly doesn't come true.

There's a ton of uncertainty involving AI's scalability and whether current progress will follow something like Moore's Law or if we've just been picking all the low-hanging fruit. AI alignment people are filling that uncertainty with maximally negative projections that an anti-human singularity is right around the corner. The biggest human inventions in terms of scale and impact were all the advances in mechanization of the industrial revolution, which took more than a century to unfold. The biggest invention in terms of impact relative to time was the Manhattan Project. Alignment people are saying (or at least strongly implying) that AI will have a much larger impact than the Industrial Revolution on a time-scale shorter than the development of nukes, while also being basically uncontrollable. People like Yudkowsky are smart, but they're predicting things an order of magnitude beyond the bounds of previous human history. Such predictions aren't rare, but they're usually made by snake-oil salesmen saying "This new invention will totally revolutionize everything! Trust me!"

Am I off-base here? I've been paying attention to AI developments but not to the degree that some people have, so there's a chance that there's a compelling case for AI being a combination of 1) inevitable, 2) right around the corner (<5 years away), and 3) uncontrollable.

The time when Trump sabotaged immigration restrictions, and the alt-right cheered

I’ve long held that most of politics is overwhelmingly dominated by some combination of 1) direct self-interest, and 2) vibes. Any notions of ideological consistency should be regarded as mere “happy accidents” rather than the norm. In the US, this issue cuts roughly equally across both parties. One particularly stark example happened a few months ago with immigration. In short, Trump sabotaged the most conservative immigration reform bill in a generation for blatantly self-serving reasons. This directly contradicts what many of his more hardline alt-right supporters want, yet instead of punishing him for doing this, they actively cheered him on. They simply like Trump’s vibes far more than they like Biden’s vibes, so they convinced themselves that the bill was akin to “surrender” through extremely strained logic.

This episode is rapidly fading from public memory given that the bill didn’t pass, but it’s such a great encapsulation of vibes-based motivated reasoning that I feel it should be highlighted before it’s forgotten completely.

Illegal immigration so far

The chart here shows migrant encounters at the US-Mexico border. While some slip through the cracks and are not counted, this still gives a good sense of the contours of illegal immigration over the past few presidential administrations.

  • Migrant numbers were quite high during the Bush years, with yearly peaks corresponding to agricultural labor needs.

  • Obama was quite hawkish on illegal immigration. Numbers were already decreasing from the Bush years, and the economic turmoil from the GFC brought numbers down further. Importantly though, Obama’s enforcement was instrumental in keeping numbers down even as the economy recovered.

  • Illegal immigration fell to its lowest point at the beginning of Trump’s term, but rapidly increased after that, first meeting and then exceeding the numbers under Obama. Numbers crashed again at the onset of COVID.

  • Illegal immigration exploded after Biden took office.

There are a couple of points worth noting here. The first is that while enforcement has an undeniable impact on illegal immigration numbers, exogenous factors should also be considered. Periods of economic prosperity in the US act as a “pull” for migrants, while recessions do the opposite. Likewise, civil turmoil in immigrant-sending countries can act as a “push” for migrants, while relative stability again does the opposite. That peak in May 2019 under Trump was due in part to a period of turbulence in Northern Triangle countries.

The second point worth noting is that Trump wasn’t really much better than Obama in countering illegal immigration, contrary to popular belief. This point deserves some elaboration.

Trump and Biden’s border policies

During Trump’s 2016 campaign, immigration was frequently at the forefront despite illegal immigration being at historic lows. Upon ascending to the presidency, Trump at least tried to keep his promise. He signed the infamous “Muslim Ban” in his first week, suspending entry for citizens of 7 predominantly Muslim countries for 90 days. He would continue with additional policies throughout his presidency, including preventing sanctuary cities from receiving federal grants, phasing out DACA, implementing a zero-tolerance policy and family separation at the border, creating new restrictions for who could apply for asylum, and many others.

The problem with all of these was that they were executive orders. Executive orders require less political capital to implement since they don’t have to go through Congress, but they’re far more brittle and subject to legal challenges or revocation when a president of a different party comes to power. Indeed, practically all of Trump’s EOs on immigration faced stiff legal hurdles. The Muslim Ban was rejected by courts twice, and only a watered down version passed on the third attempt. The family separation policy and restrictions on asylum were similarly watered down heavily. The policies on sanctuary cities and the phaseout of DACA were basically killed entirely.

Another issue with Trump’s implementation is that it was done with little tact. Any sort of reform will encounter pushback, with bigger changes tending to lead to more of a backlash. This can be mollified somewhat by a good PR campaign. Indeed, the ability to push through substantial reforms without angering huge swathes of the country can be seen as one of the key skills of a successful politician. Trump and his team did not do a very good job of this. Few efforts were made to get buy-in from moderates. Instead, Trump’s modus operandi was typically controversial unilateral action, followed by doubling down with rhetoric like “shithole countries” that may have flattered his base, but was very poorly received among Democrats and independents. Trump had this problem in many more areas than just immigration, as Scott Alexander noted in 2018.

The end result was that while Trump certainly talked up his immigration policies as successes, most of them were little more than PR stunts. Illegal immigration surged substantially every year for the first three years of his presidency and peaked in 2019 at a level far higher than what Obama ever had. Likewise, legal immigration measured by the number of lawful permanent residents added per year was basically the same as during Obama’s presidency, only dipping substantially in 2020 with the onset of COVID. Furthermore, all of the hostile rhetoric Trump used created a backlash that (at least partially) helped propel Biden to the White House in 2020, and ensured he had a clear mandate to roll back Trump’s policies.

And that’s exactly what Biden did. On his first day in office, he axed the majority of Trump’s executive orders with the stroke of a pen. The first 100 days of Biden’s presidency were defined by “undoing Trump” in practically every area, and in terms of immigration that meant less hostility, fewer rules, and a more welcoming attitude. Cracks began to show almost immediately as illegal immigration soared, and then kept soaring month after month. It surpassed Trump’s worst month, and then kept climbing even higher before settling at a rate unseen in at least the past 3 administrations. December 2023 marked the worst month at nearly 250K encounters, with several preceding months having >200K encounters. For reference, Obama’s second term only saw a brief period above 50K encounters before declining to a steady-state of around 30K-40K encounters.

This rapidly became a political liability for Biden. Despite deploying Kamala Harris with her infamous “do not come” speech, illegal immigration kept increasing and Biden seemed helpless to address it, effectively getting himself caught between a rock (giving fodder to Republicans) and a hard place (alienating his base, reneging on promises, etc.). Ominously, things only seemed to be getting worse. Biden tried to use Trump-era COVID restrictions to limit some immigration through Title 42, but COVID couldn’t be used as a justification forever. What’s more, Biden’s actions significantly worsened a loophole in the system through abuse of a particular asylum designation. This article discusses it in detail. To summarize:

  • When the DHS encounters an illegal immigrant, it has two options: standard removal, or expedited removal.

  • Standard removal requires a court case with lawyers present to give evidence, while expedited removal is a streamlined, unreviewable process meant to reduce the burden on the DHS and the court system.

  • Illegal immigrants can indicate they intend to apply for asylum by establishing “credible fear”. While the threshold to asylum is fairly high, the “credible fear” threshold is very low, which at least starts the process towards asylum and thereby prevents use of expedited removal.

  • While standard removal is ongoing, the US has 3 options for where to keep them: (1) Parole them out into the US, (2) keep them in ICE detention centers, or (3) kick them back to the country from which they entered, i.e. Mexico.

Obama did (1), but apparently the loophole wasn’t well-known enough to be a huge issue yet. Trump tried to go after asylum directly, but those efforts mostly fizzled in court. He then tried to do (2), but this caused a huge overcrowding problem as detention centers weren’t built big enough to accommodate the huge influx. After some bad press, he tried to do (3), which sort of worked when courts weren’t throwing spanners into the works, which they did frequently. Biden reverted back to (1), but now it was well-known that you could come to America illegally, utter the magic words “credible fear”, and you’d be let out into the community. Some derisively referred to this as “catch and release”. From this point, some immigrants simply didn’t show up to their court hearing, while others received court dates so far in the future (up to a decade or longer in some cases) that it didn’t matter. This became a vicious cycle: as more immigrants abused the loophole, the courts became more and more clogged, which made the loophole more effective, which further incentivized anyone who wanted to come to the US to give it a try due to this One Crazy Trick ICE Doesn’t Want You To Know About.

The Senate compromise deal

After a few years of spiraling migration problems, it became clear that the center could not hold. Biden capitulated and signaled that he was willing to give concessions to Republicans to get immigration back under control. This willingness coalesced around the same time that an important foreign aid package was being discussed, with some Republicans stretching credulity a bit when they claimed that illegal immigration was functionally indistinguishable from Putin’s invasion of Ukraine. Thus, the idea of a “compromise” bill was born, where Biden would give in to Republican demands on immigration in order to get his foreign aid passed. This came to the fore in late January and early February of 2024.

You can read the full text of the bill here, but non-lawyers trying to read actual bills written in thick legalese is like trying to decipher jabberwocky growls. A much more scrutable summary is available here.

Division A is all about the foreign aid. This chunk would eventually be passed in April in a standalone vote.

Division B is the immigration part. This was primarily negotiated by Republican Senator James Lankford of Oklahoma. Notably, this would have been the first major immigration reform bill (NOT executive order!) passed since Reagan. Everything else since then has been done through unilateral presidential action or the courts. Since this would have had the backing of Congress, its provisions were quite sweeping compared to the piecemeal efforts that came before. It:

  • Includes billions of dollars for immigration enforcement, including money for detention centers, 2700 new border agents, asylum case officers to break the vicious cycle, deportation flights, etc. It’s hard to overstate how much money this bill would have ladled onto border protections, with the biggest increases going to the usual agencies like ICE and CBP, with smaller chunks going to ones that I wasn’t even aware were part of border enforcement, like FEMA and the US Marshals Service. It also gives case officers a permanent 15% raise over the standard GS schedule of government pay.

  • Gives a bit of money to USAID for stanching immigration at its source, in the Northern Triangle countries and elsewhere.

  • Restarts and funds building of Trump’s wall, which Biden canceled early in his presidency.

  • Modernizes border infrastructure generally, such as adding more sophisticated monitoring equipment and accepting fingerprint cards or biometric submissions for use in immigrant processing. You know, things that would be nice to have given the last major immigration bill is almost 40 years old at this point.

  • Raises the threshold on “credible fear” substantially to actually close the loophole. Currently, credible fear is evaluated using the lower “significant possibility” standard.

  • Raises the threshold on asylum generally even after they pass the first hurdle, and it funnels as many cases as possible into the expedited removal process.

  • Ends “Catch and Release” and formalizes the “Remain in Mexico” policy. Those who arrive at ports of entry are placed under government surveillance, while those who arrive between ports of entry are detained outright, with funding provided for new detainment beds.

  • Establishes an additional asylum bar if there are reasonable grounds to believe an individual could have internally relocated in their country of origin or country of last habitual residence, in lieu of seeking protection in the United States.

  • Creates a Border Emergency Authority, a “break in case of emergency” power if the border became overwhelmed. This requires the DHS to ignore all asylum requests except those that fall under the Convention Against Torture, which has a high bar. It also further streamlined the expulsion process, allowing for immediate deportation in a range of scenarios. There was to be no public notification for this authority to be enacted, so an immigrant arriving would never be sure if it was active or not. This is the closest the US would come to “closing the border” for an extended period of time that wasn’t due to a national emergency like what happened after the JFK assassination or 9/11. To prevent this emergency tool from simply becoming the new normal, the Authority could only be activated if border encounters exceeded an average of 4000 per day over a 7-day period. Conversely, it also prevents abuse in the other direction, i.e. a president deciding never to activate it, as activation would be mandatory if encounters averaged over 5000 per day over a 7-day period. Note that border encounters were far higher than 5000 per day when the bill was being debated, so Biden would have had no choice on the matter.

  • Does NOT include any significant amnesty, even for DREAMers. Almost every serious attempt at reforming immigration had previously settled on the compromise of amnesty for current illegal immigrants in return for enforcement at the border. The most recent major attempt at immigration reform under the Gang of Eight did exactly this. Trump himself acknowledged this political reality in his first State of the Union address in 2018 when he came out in favor of giving amnesty for DREAMers. The fact that this is nowhere to be found in this bill is a significant implicit concession.

There are also a handful of concessions to the Democrats:

  • Allows processing and conditional permanent residence for Afghan collaborators.

  • Authorizes an additional 50,000 immigrant visas each year for the next five fiscal years.

  • Establishes a carveout in some of the rules above for unaccompanied minors, which in 2024 have made up <5% of all encounters.

  • The Border Emergency Authority requires a lower limit of 4000 encounters per day as discussed above, so a future Republican president wouldn’t be able to use it as the new normal unless there was an actual emergency. It also sunsets after 3 years unless renewed.

  • Republicans likely wanted restrictions on all asylum claims, but Dems kept a carveout for the Convention Against Torture.

Those concessions are really tiny. The last 3 bullet points are just minor restrictions on the new powers that would be in place. Only the first 2 bullet points are concessions in any meaningful sense. Helping Afghans who collaborated with the US is a one-off now that the war is over, and is a good idea since the US doesn’t want to get a reputation of abandoning those who help it. The 50K new legal immigrants a year is time-limited to 5 years, and is much, much less than the status quo of 200K+ illegal immigrants per month that is happening now. Heck, it would have only been 2-3 months’ worth of illegal immigrants encountered under average Trump or Obama years, so it’s a very small price to pay.

The National Border Patrol Council, the union that represents Border Patrol agents, endorsed the proposal and said it would drop illegal border crossings nationwide. The group endorsed Trump in 2020 and has been highly critical of Biden’s border policies.

It’s also interesting to compare this bill to the Border Coalition Letter that was submitted to Congress in 2022. This letter was sent on behalf of a bunch of conservative think tanks like the Heritage Foundation, Conservative Partnership Institute, and several that I’ve never heard of, like the Federation for American Immigration Reform, which the SPLC classifies as a hate group. The letter demanded the exclusion of amnesty of any type, the creation of an Authority to immediately expel illegal immigrants, increased restrictions on asylum, mandated resources for the border wall, increased funding for the CBP and ICE, and an end to the abuse of parole authority. The bill bears a striking resemblance to this letter. Granted, it doesn’t do everything, as there are a few carveouts for stuff like asylum under the Convention Against Torture, and the letter also asks for states to overrule the federal government when it comes to border enforcement (something that Texas has been motioning towards recently). But overall, the bill does the vast majority of what was asked for by some of the most conservative immigration groups in the country.

Trump swoops in

So yeah. Trump blew it all up.

The reason he did this was as obvious as it was cynical: he didn’t want Biden to have a “win” on the issue. He wanted to keep the issue in the news as a liability for Biden so he would have a greater chance at winning in November. He didn’t exactly keep his motivations secret. Nor was this the first border bill that Trump sabotaged. The overturning of Roe v Wade is instructive here, as it was a major “dog catches the car” moment. Republicans loved to campaign on restricting abortion, but when the Supreme Court actually handed them the chance to do so, they quickly realized the costs it would entail. What had once been a rallying cry for conservatives turned into a liability, and now the Democrats have the wind at their back on the issue. Why do the same for immigration by actually enacting favorable policies?

Of course, it’s not helpful to be openly cynical to your supporters, so the official reason that Trump, Gaetz, and many others trotted out to oppose the bill related to the Border Emergency Authority. In essence, they boiled the entire bill down to that upper limit threshold of 5000 illegal immigrants per day. The extra enforcement, the money for border agents, the restarting of the wall construction, the closing of the asylum loophole, the end of Catch and Release? None of that mattered. It was all reduced to that 5000 number that you’ll see repeated over and over again in Republican criticisms of the bill. What’s worse is that this number is presented as a capitulation to Democrats rather than a ceiling on the use of a draconian new power granted in a heavily conservative bill. It’s presented as if the bill mandates open borders for the first 5000 illegal immigrants every day, and only then begins to enforce some border policies. This is so laughably, bafflingly wrong that it defies belief.

Obviously the bill isn’t perfect. There are legitimate criticisms that could be levied. For instance, Republicans could say that Democrats shouldn’t get any new legal immigration in exchange for fixing the law, even the paltry 50K number that the bill would mandate. But actually analyzing the bill to any serious degree would quickly show how conservative it is, so Republican leaders mischaracterized the bill so heavily that I’d say most reasonable people would classify it as “outright lying”.

In the world of Republican vibes, there’s the idea that conservatives are always the suckers when it comes to immigration. The idea is that Reagan’s bill was supposed to fix the issue, but the Democrats skillfully reneged on their promise. There’s also the idea of the ratchet: Republicans will compromise with Democrats, and Democrats will get a bunch of concessions but won’t actually fulfill their end of the bargain, either because the Republicans are RINOs who don’t actually care about limiting immigration, or because the true-believer Republicans are simply outmaneuvered. Then in the next round of dealmaking, more concessions will be given, and on and on it goes until America is overrun with illegals. For example, in the first deal, “illegal aliens” are reclassified as “illegal immigrants”, and amnesty is provided for, say, 3M of them in return for enforcement of the border laws. Then the enforcement doesn’t happen, ten years go by, and another round of negotiations happens. This time “illegal immigrants” is changed to “undocumented persons” and now we need to give amnesty to the first 3M AND the 5M that arrived since then, but in exchange now we’ll totally have enforcement… pinky promise! And then it doesn’t happen again and… you get the picture.

There’s a kernel of truth to that idea, although it’s obviously extremely oversimplified and lacking in nuance. That said, those vibes are powerful enough that compromise is thoroughly delegitimized for the Republican rank-and-file. Trump’s uncompromising vibes in 2016 were a large part of what won him the Republican primary. He sustained those vibes through his presidency with his bombastic executive orders that drove news headlines but did little to fix the underlying issues. Trump used those vibes again to kill this bill, as all he had to do was vaguely point to the 5000 number in the bill, imply that it was a concession, and the bill was effectively dead no matter what it actually would have done.

Other concerns with the bill

While misrepresentation of the 5000 number in regards to the Border Emergency Authority was the most frequent criticism by far, there were a couple of other, less goofy criticisms that deserve examination.

The first is that Biden already had the tools to solve the border crisis, and therefore this bill wasn’t necessary. This is typically paired with vibey “Republicans cooperate, Democrats defect” arguments that I detailed in the previous section, i.e. that the bill must have been a “trap” of some sort. Vibes aside, there is some degree of truth to this. As we saw earlier in this article, Biden’s policies were indeed principally responsible for the recent explosion in illegal immigration. Probably the clearest remedy would be reimplementing the Remain in Mexico policy that has been shambling along, half dead. Biden attempted to kill this policy early in his presidency, and courts initially agreed he could do so, until they didn’t, so the policy is technically still alive. Reimplementing this would take at least some of the wind out of the vicious cycle in regards to the asylum loophole, although there would still be the omnipresent specter of legal threats, and now Mexico has said it will refuse to cooperate.

The issue with this idea is that even if Biden were to reimplement all of Trump’s executive orders, they still amounted to little more than a bandaid on a bullet hole. Critics of the bill are technically correct in pointing out that there was less blood before Biden ripped off the bandaid, but it’s ludicrous to then assume that the bandaid was all that was ever needed. US immigration law and border enforcement is fundamentally broken in a number of ways, and this bill would have gone a long way in addressing the worst problems. Recall that Trump himself tried to go after asylum laws directly, but his efforts mostly fizzled in the courts.

Another criticism that was sometimes levied is that Republicans should simply hold out for Trump to become president to truly fix immigration. Again, this typically came packaged with vibey concerns that any deal with Democrats must necessarily imply some ratcheting of concessions, and thus the only way to address the issue is unilateral Republican action, headed by a true-believer like Trump. To steelman this position: the idea that the political capital to solve illegal immigration would evaporate if the issue was successfully mitigated is a sound one. Democrats were only willing to come to the table in the first place due to the extremely tenuous position they found themselves in with the surge of illegal immigration. This bill almost certainly would have solved that surge, which would give Trump less of a mandate to take drastic action if he wins in November.

The most obvious retort to this idea is that Trump is by no means guaranteed to win in November. As of the time of writing, prediction markets give Trump a 47% chance of winning, which we can round up to 50%. This essentially means the Republicans are gambling on a “double or nothing” approach, but even this prospect is unsteady. For starters, how much more could Trump deliver in excess of this bill, even under the best plausible conditions? HR2 is instructive here, which passed the House in 2023 but is not likely to advance any further in the current Congress. As such, it’s essentially a conservative wishlist on immigration. It is indeed stronger than the Senate bill, but it’s not massively stronger. I’d say instead of “double or nothing” it’s more like “10-20% more or nothing”, which has decidedly less of a ring to it.

Furthermore, Democratic willingness to capitulate has an expiration date. If the moment isn’t gone already, then it’d definitely be gone when Trump takes office for a second time, which would mean he’d require control of both the House and the Senate to push through a stronger bill. Prediction markets currently give a 74% chance for Republicans to clinch the Senate, which we can round up to 75%, and a 44% chance to win control of the House, which we can again round up to 50%. If results from the races were perfectly independent, simple statistics show us that Republicans only have a <20% chance of achieving a trifecta. Granted, the races almost certainly won’t be uncorrelated with each other, but independence still establishes a rough lower bound of likelihood. In essence, Republicans are gambling at 20-50% odds that they’ll be able to get a bill that’s 10-20% better. Even this is still underselling it, since it would have to go through one major final hurdle: Trump himself. Republicans already had a trifecta from 2017-2019, yet Trump chose not to prioritize immigration other than through flimsy executive orders. Who’s to say he wouldn’t choose to do so again?
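The trifecta arithmetic above can be sketched in a few lines (using the rounded market odds from the text, and assuming independence between races, which as noted makes this a lower bound):

```python
# Rounded prediction-market odds cited above (assumptions, not live data)
p_presidency = 0.50  # Trump wins the White House (47%, rounded up)
p_senate = 0.75      # Republicans take the Senate (74%, rounded up)
p_house = 0.50       # Republicans win the House (44%, rounded up)

# If the three races were perfectly independent, the trifecta chance is
# just the product. Positive correlation between races would raise the
# joint probability, so this is a rough lower bound.
p_trifecta = p_presidency * p_senate * p_house
print(f"Trifecta lower bound: {p_trifecta:.1%}")  # → 18.8%, i.e. <20%
```

In reality the races are correlated, so the true joint probability sits somewhere between this ~19% floor and the ~44% chance of the least likely individual race.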

The upshot

I’m sure some people will dismiss everything I’ve written here as concern trolling. They’ll assume I’m secretly a Democratic operative who wants to sow discord amongst Republicans. In reality, I’m just someone who actually wants to get immigration under control. Immigration can be a source of strength, but it must be harnessed very carefully to not cause major problems.

This bill represented the most conservative major immigration reform in a generation that actually had a chance at passing, and Donald Trump killed it for purely cynical reasons. This single bill would have done more than every one of Trump’s executive orders put together. Anyone who’s been seriously watching him knows that he’s utterly self-serving, but what was truly revolting was how the anti-immigration wing of the Republican party not only let him get away with it, but actively cheered him on. It’ll likely be totally forgotten too, wrongly dismissed as nothing more than another Democratic trap.

The worst part of the bill was that many of its provisions weren’t permanent. Some parts, like closing the asylum loophole, were, but the funding for extra agents would eventually run out. Similarly, other provisions like the incoherently reviled Border Emergency Authority were due to sunset in 3, 5, or 10 years. But the correct response would have been for Republicans to seize this golden opportunity with both hands and grasp it as hard as they could. Then, they should have fought future battles to ensure the provisions were made permanent. Instead, they squandered a period of maximal Democratic vulnerability on the issue, when the Dems were not only willing to give concessions but were actively asking for them.

Illegal immigration has cooled a bit since its apex in December of 2023. In the CBP’s most recent report from March, encounters are down by 45%. This is still massively elevated from where it was before, but it will at least allow Biden to claim he’s on top of the issue. It seems he’s doing this with ad-hoc fixes, like making deals with intermediate countries that are unlikely to really solve much long-term. In killing the bill, Trump has likely undercut one of his attack vectors against Biden somewhat. When pressed in a debate about the issue, Biden can say “I tried to fix it, but you wouldn’t let me”. In the end, few people’s minds will be changed, and the most likely outcome no matter who becomes president is that the US continues muddling along with the status quo on immigration, which means more bandaids and can-kicking. In the off chance that an immigration reform bill actually does pass, it will likely be far less conservative than this bill would have been.

This story is a great encapsulation of two important phenomena:

  1. How utterly asleep at the wheel most Europeans were in regards to Russia, especially post-Crimea.
  2. How much more dangerous Russia could be if they got a handle on corruption. But alas, no dictatorship can really solve corruption since it's too beneficial to the leader at the top for maintaining his position.

This post really gets my troll senses tingling. An account that's less than a week old posting about HBD, which is probably the most offensive topic a leftist would come across. Then starting said topic with "as a black person". Then not really saying much but vague agreement. I could easily see this post being the result of a leftist forum user from some other site saying "Hey guys, I'll go to that place where Nazis justify racism, and pretend to be a black man agreeing with them. Then they'll show us what they REALLY think!!!"

If this isn't the case then I'd suggest posting that essay-type post sooner rather than later, as you'll get more interesting answers that way.