The Louisiana Purchase is usually cited among Jefferson's claims to fame. But "bought a quarter of the country for a song" is impressive in a way that historically-standard conquests usually aren't; Polk isn't given much reverence for Guadalupe-Hidalgo.
What's your preferred genre(s)? What are your tolerances for the newest "hope you spent $600 on a GPU recently" and for the oldest "just enough pixels to jog your imagination" games? How many hours of play time are you looking to spend?
My favorite games of all time are probably Portal, Deus Ex, and Star Control 2. Honorable mentions to Outer Wilds, Civilization (especially 4 and 5), Skyrim, Kerbal Space Program, (Telltale's) The Walking Dead, and Baldur's Gate.
Oh, but for
I was hoping for subjective descriptions of fun, interesting gaming sessions people have had recently.
I think I'm stuck. "Fun, interesting" and "recently" make Outer Wilds a shoo-in (I literally bought a Steam Deck dock and bluetooth controller solely so that when I recommended it to my kids I could watch them play on the big screen), but no self-respecting Outer Wilds player would describe Outer Wilds to someone else! Half the fun is encountering everything for the first time and trying to figure out how it all works and what it's all about.
Efficient long division
Wait - you value this, but not polynomial division? They're both things you can just ask the computer to do for you instead, but at least polynomial division requires you to hunt down a computer algebra system; long division capability comes pre-installed on every phone.
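For the record, polynomial long division really is the same handful of steps as the grade-school integer version (divide leading terms, multiply back, subtract, repeat). A toy Python sketch, using coefficient lists with the highest degree first; the function name and representation are my own, not from any CAS:

```python
def poly_divmod(num, den):
    """Divide polynomials given as coefficient lists, highest degree first.

    Returns (quotient, remainder), mirroring the hand algorithm:
    divide the leading terms, subtract that multiple of the divisor,
    and repeat on what's left.
    """
    num = list(num)  # working copy; gets reduced to the remainder
    quot = []
    for i in range(len(num) - len(den) + 1):
        coeff = num[i] / den[0]          # divide leading terms
        quot.append(coeff)
        for j, d in enumerate(den):      # subtract coeff * (shifted divisor)
            num[i + j] -= coeff * d
    return quot, num[len(quot):]

# (x^2 + 3x + 2) / (x + 1) = (x + 2), remainder 0
print(poly_divmod([1, 3, 2], [1, 1]))   # → ([1.0, 2.0], [0.0])
```

A dozen lines, which is sort of the point: it's not that the algorithm is hard, it's that nobody bothers keeping it in their head when a CAS is a search away.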
IMHO it's not one of the best things he's written in recent years (I'd put Vibecession: Much More Than You Wanted To Know and Prison And Crime: Much More Than You Wanted To Know above it, for the research), but it is his best writing in recent years.
Ah, but the catch is that using the SAT directly still looks too suspicious for an employer to do, so you have to use the whole college degree instead. That could be an even better filter (assuming you keep track of which colleges still use the SAT) because it includes a measure of conscientiousness, but it's also a vastly more expensive filter, and the mix of "do they have a high enough IQ, and can they afford tuition plus four years' opportunity cost" might have a bigger disparate impact than IQ alone.
Nobody gets 2 weeks off specifically in August, but even new hires typically get 10 days of paid vacation, counted separately from paid sick leave, per year; total PTO for all working Americans averages around 24 days per year. That includes Christmas vacation time, but spending a big chunk of the remainder on a summer vacation (though more often July than August) is pretty popular.
On the other hand, we don't use all our leave. The majority of Americans report unused PTO days in any given year, an average of 6+ per person. That kind of resonates with that commercial, as well as with my own experiences. I got a nice check for unused vacation days when leaving my last job, and these days my vacation planning is constrained by a rule preventing me from rolling over more than some maximum (7 weeks?) of leave days from year to year, so I end up "burning" some on 3-day weekends when I can't drop work for a week or more at a time.
It's entirely possible to do this in one's own backyard.
Your backyard may vary. Rabbits and mice sneak under my fences easily, which I think explains why, although I can get tomato plants to grow like giant weeds, their fruits tend to vanish on me almost immediately after ripening, before I can pick them myself.
Into/Across Spiderverse. I reserve the right to throw this out if the third movie is atrocious, but unfortunately there's a real risk of that happening.
"Into the Spiderverse" would stand up okay on its own, I think, even if the trilogy doesn't stick the landing, but I agree that it's only going to belong in a "favorites" list if all the checks they wrote in the first half of the sequel (I can't even call it the first sequel; not enough closure) don't bounce.
X-Men First Class => X2 => Days of Future Past. The payoff for this one is immense.
That's an interesting watch order. Why leave out the first X-Men movie? I understand deciding that the benefit of "Days of Future Past is a little better if you watch X-Men 3 first" isn't worth the cost of "but you have to watch X-Men 3 first", but watching X-Men before X2 is win-win.
Are you using "thinking mode" or "research mode" with your LLM(s)? With advanced math even the latest models will still hallucinate on me when they're just asked to immediately spew output tokens, but at least ChatGPT 5 has gotten good enough with "reasoning" that I haven't caught it in an error in output from that. (Some of this might be selection bias, though: waiting 5 minutes for a response is enough hassle that I'll still ask for immediate-output-spew for anything that's easy enough to double-check myself or non-critical enough that I can live with a few mistakes)
I still wouldn't rely on any claim for which it can't give me either a step-by-step proof or a linked citation, and with history you're stuck with citations as your only option, so ask for (and follow up on, and worry about the quality of the sources of) those.
The only problem is that they give you completely different answers! Of course, I could just rely on how plausible their answers sound (if they can fool me, they can fool the players), but I am too neurotic for that.
You want to keep your narrated facts straight, and you want your worldbuilding to be consistent with the facts, but don't be afraid to add a few sets of conflicting lies and half-truths to your dialogue. There's only one past, but there are sometimes multiple conflicting histories of it and there are often multiple conflicting perspectives on it. Consider the Brazilian vs the American attitudes toward Santos-Dumont vs the Wright brothers.
This was my point, maybe I should have put it at the end and not the beginning.
Your main point could have been that "this is an amazing pasta maker!", and could be entirely correct, but no choice of sentence placement is going to get people to ignore any other inflammatory points you happen to make along the way. If you hate inaccuracy, callousness, and polarization, you have to be very careful not to drop a blood libel in the middle of your post; otherwise it doesn't come off as hate, just jealousy.
Instead their conclusion is, to tweak your phrase, "it should have been immediately clear to the ICE officer and to viewers that that suddenly-accelerating SUV did pose a threat of death or grievous injury".
No, it wasn't. So many failures of theory-of-mind going on right now.
For me, too, to be fair. It now seems that my theory (that you think it should have been clear to the officer that the car hitting him posed no serious threat) was wrong, and instead your mistaken belief is that it's a requirement for self-defense that the threat be immediately clear? That would just get us back to my first SNL gag reference, days ago:
"I think a good gift for the president would be a chocolate revolver. And since he's so busy, you'd probably have to run up to him and hand it to him."
Obviously suddenly brandishing a chocolate revolver will never be a clear threat, because it's not actually a threat, but a reasonable officer would construe it as a threat and would be justified in using deadly force to defend against it. Likewise, even if the driver of a vehicle wasn't gunning it hard enough to spin out her tires on ice, an arresting officer in the path of the vehicle is allowed to interpret the criminal's car suddenly accelerating into them as a grievous threat, and is not required to think "but what if I double check the tire angle" before it may become too late.
Do you think the administration's reaction to the shooting is a "reasonable path for America"?
Nope. Good's crimes were obstructing justice, harassment, and vehicular assault, not terrorism. But I can't really directly respond to the administration, not with anything more serious than upvoting someone else's twitter response to a higher-but-still-rounds-to-the-same-number total. Trump never became a Motte poster. (Even if he did, I couldn't imagine him obeying the rules here well enough not to eat a permaban within a year.) If an administration called Good's actions "terrorism" here, I'd give them the same pushback and chance to correct it that you got for "murder", and the same downvotes to posts where they either failed to correct or repeated the libel.
I get that it feels unfair that you're being held to higher standards than the White House just because you're here and they aren't, but what's the alternative? If I said you couldn't meet standards higher than theirs I'd deserve a mod warning for such an egregious insult.
My first impression is to read it as either a condescending sneer or a threat of violence, depending on the tone.
"Just walk away." is the first thing I think of ... I can't believe it didn't make the Trope page!
It’s crazy that someone would offer that to children as a social script.
Everybody sees the dangers of cultural appropriation once it's their culture.
In an ideal world "StarCraft 2" and "SC2 but with better AI" would just be two different game variants, and a vanilla-SC2 player wouldn't complain about the AI options any more than a blitz-chess player would complain about someone else preferring to play without any clock.
But everybody's attention is a scarce resource vied over by competitors, and in a world where network effects make it much more enjoyable to have everybody else's attention go to the same target as yours does, it's actually reasonable to worry about whether an alternative is going to stop that from happening. If you actually preferred Betamax over VHS, HD-DVD over BluRay, etc, it sucked to be you.
I thought SC2 was popular enough that nobody should need to worry about splitting the player base, though; surely both sides of any split would be able to find online matchups easily for years to come? At the very least an experienced player who eschews better AI should be able to find a game against a noob who doesn't. Maybe video game fans have just been through so many iterations of the "Sega Genesis vs Super Nintendo" fight that getting worked up about such things is a reflex now.
So they push back on it because they don't want to lose the thing they love, and they're afraid that's what would happen.
If you want to see these sorts of fights played out on Hard Mode, look at the worries some people have over driverless cars or vegan meat substitutes. The bailey is that driverless cars are unsafe or that vegan pseudomeats are unhealthy, and that no amount of technological improvement will ever make them good enough, but I think the (occasionally explicitly stated!) motte in each case is the risk that, once the new alternative actually is better for most people, there'll be pressure to make the traditional alternative outright illegal. Nobody's ever going to ban anyone's preferred versions of Star Trek or StarCraft, but animal rights groups or public safety groups might actually get some traction against real meat or human-error-prone cars once the main argument for them is pared down to "Freedom!"
Wait, there are multiple people confused about what OP is confused about?
I'd have presumed the grey area here is a belief like "it's illegal to hit cops with your car, but the response to a crime that poses no threat of death or grievous injury is supposed to be an arrest, not a shooting" (correct!) combining with a belief like "it should have been immediately clear to the ICE officer that that suddenly-accelerating SUV posed no threat of death or grievous injury to anyone" (not "obviously" correct, unless there's some really good video that contradicts what I've seen from seemingly-good-enough videos).
This actually is a scissor statement out of mythology, isn't it? It's not just obviously true to some people and obviously false to others, but so obviously so that people can't even imagine what chain of reasoning might lead someone to take the contrary position.
Hopefully @EverythingIsFine will pop in to explain that I'm right ... or that someone else is, or a different chain of reasoning still. If my guess is right then there's so many failures of theory-of-mind going on right now that I have to wonder how badly I'm doing myself.
information theory broadly defines some adjacent bounds.
Don't forget physics. We're probably nowhere near the limit of how many computational operations it takes to get a given "intelligence" level of output, but whatever that limit is will combine with various physical limits on computation to turn even our exponential improvements into more logistic-function-like curves that will plateau (albeit at almost-incomprehensible levels) eventually.
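To illustrate the shape of that argument with a purely toy model (made-up growth rate and cap, nothing calibrated to real compute limits): an exponential and a logistic curve are nearly indistinguishable early on, and only diverge once the cap starts to bind.

```python
import math

def exponential(t, r=1.0):
    # Unbounded exponential growth: exp(r*t)
    return math.exp(r * t)

def logistic(t, r=1.0, cap=1e6):
    # Logistic growth with the same initial rate: looks exponential
    # early on, then plateaus at `cap` as the limit binds.
    return cap / (1 + (cap - 1) * math.exp(-r * t))

# Early on the two are nearly indistinguishable...
print(round(exponential(5)), round(logistic(5)))    # → 148 148
# ...but only one keeps diverging.
print(round(exponential(20)), round(logistic(20)))  # → 485165195 997943
```

The point being: from inside the early part of the curve, you can't tell which one you're on; the ceiling only announces itself near the end.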
at the scale of economics, "singularity" and "exponential growth" both look darn similar in the near-term, but almost all practical examples end up being the latter, not the former.
"Singularity" was a misleading choice of term, and the fact that it was popularized by a PhD in mathematics who was also a very talented communicator, quoting one of the most talented mathematicians of the twentieth century, is even more bafflingly annoying. I get it, the metaphor here is supposed to be "a point at which existing models become ill-defined", not "a point at which a function or derivative diverges to infinity", but everyone who's taken precalc is going to first assume the latter and then be confused and/or put-off by the inaccuracy.
That said, don't knock mere "exponential growth", or even just a logistic function, when a new one outpaces the old at a much shorter timescale. A few hundred million years ago we got the "Cambrian explosion", and although an "explosion" taking ten million years sounds ridiculously slow, it's a fitting term for accelerated biological evolution in the context of the previous billions of years of slower physical evolution of the world. A few tens of thousands of years ago we got the "agricultural revolution", so slow that it encompasses more than all of written history but still a "revolution" because it added another few orders of magnitude to the pace of change; more human beings have been born in the tens of millennia since than in the hundreds of millennia before. The "industrial revolution" outdoes the previous tens-of-millennia of cumulative economic activity in a few centuries.
Can an "artificial superintelligence revolution" turn centuries into years? It seems like there's got to be a stopping point to the pattern very soon (years->days->tens-of-minutes->etc actually would be a singular function, and wouldn't be enabled by our current understanding of the laws of physics), so it's perhaps not overly skeptical to imagine that we'll never even hit a "years" phase, that AI will be part of our current exponential, like the spreadsheet is, but never another vastly-accelerated phase of growth.
You're already pointing out some evidence to the contrary, though:
real humans require orders of magnitude less training data --- how many books did Shakespeare read, and compare to your favorite LLM corpus --- which seems to mean something.
This is true, but what it means from a forecasting perspective is that there are opportunities beyond simple scaling that we have yet to discover. There's something about our current AI architecture that relies on brute force to accomplish what the human brain instead accomplishes via superior design. If we (and/or our inefficient early AIs) manage to figure out that design or something competitive with it, the sudden jump in capability might actually look like a singular-derivative step function of many orders of magnitude.
I've only looked at his introductory post, so hopefully he addresses my point later, but the introductory post would seem to be the natural place to discuss why we don't amend more often, and he does discuss that question, but with what I feel is only one of the multiple answers:
"...you need about 85%+ public support to ratify a constitutional amendment. It’s pointless because, if you could ever get that much public support for your divisive policy question, you’d no longer need a constitutional amendment, because you’d have won the argument and all the relevant laws already."
This is true for many object-level laws, but there are loads of exceptions. An Amendment allows you to credibly precommit to not change laws later, which makes it attractive for a number of tasks:
1. Rules intended to protect human rights, where we fear our descendants might backslide enough to repeal a mere law but not enough to overturn an amendment.
2. Rules intended to be compromises via universalizing principles, for which a law isn't enough to enforce the compromise. If I hate being unable to condemn some right-wing ideology and you hate being unable to condemn some left-wing ideology, I might hate the thought of losing my freedom to censors half the time more than I relish the thought of the same happening to you the other half of the time, and something like the First Amendment is a win for both of us, even if we couldn't get a coalition to protect either ideology alone. In a bad enough Culture War making such a principle into law may feel like it's just giving the other side a chance to get a 4+ year head start on attacking us again when they repeal the law first while they're in power, but an amendment might have more teeth.
3. Rules which cover the biggest meta-level questions of how the mechanisms of government should work, the cases where the constitution already specifies a mechanism that can't be overridden by a mere law. The House Rules Committee can do a lot, but it can't reduce the requirements for overriding a Presidential veto (his proposal #1), or expand its size to 11,000 (his #4), etc.
And pretty much every one of his proposals falls into category 3 here, doesn't it? He's not suggesting a "Write the Roe v Wade penumbras into the umbra" amendment, or a "define personhood as starting with conception" amendment; all his stuff is procedural at a high enough level that you can't do it without an Amendment.
So ... why don't we do any of those Amendments, either, anymore? I'd say it's a combination of our increasing political polarization with the realization that, so long as we're trapped by Duverger's Law into a two-party system, every meta-level change is also a potential change in the equilibrium point of that system, a zero-sum game. Either more easily overridden vetoes will mostly help the Democrats, in which case you're not going to get a supermajority because you can't persuade enough of the Republican-leaning half of the country to agree, or they will mostly help the Republicans, in which case you're not going to get a supermajority because you can't persuade enough of the Democratic-leaning half of the country to agree. Perhaps at some point we'll have enough people sick of both parties that that will be a voting bloc worth catering to? But until then this is all a sadly academic discussion.
This is pure "scissor statement" video, isn't it? It seems clear that the driver isn't trying to hit the cop, since she's steering away from him and away from the direction he's moving in, but even with three angles to look at it's not clear to me whether she hits him anyway (I think not, but I won't be surprised if badge cameras prove me wrong) or whether she would have hit him had he not already been dodging to one side (I think so, but again awaiting further evidence).
I think what makes up my mind is that I don't think the situation was clear to the officer either, not if he's having to make this decision so fast that his detractors are having to replay the clips in slow motion. In hindsight he could have done better, but we want to be able to hire even average cops, for a job where they'll frequently be surrounded by people who hate them and try to kill them, and that's not going to be possible unless we take seriously the sorts of "mens rea"/"reasonable person" requirements we should have to prosecute what might be a natural attempt at self-defense.
Back in the day, Saturday Night Live recognized that this was a funny joke:
"I think a good gift for the president would be a chocolate revolver. And since he's so busy, you'd probably have to run up to him and hand it to him."
It wasn't because they were a bastion of right-wing television, or because they thought Clinton had given murderous orders to his secret service agents, it was because they recognized that it would be ridiculous to do something that looks so threatening, even something actually innocent, without anticipating the likely consequences.
I don't even think that Yudkowsky was the best thinker on LessWrong. Both David Friedman and Scott Alexander (when he was on) surpass him easily IMO.
This is trivia, not science, but for kicks I decided to see how many LessWrong quotes from each user I've found worth saving over the years: Yudkowsky wins with 18 (plus probably a couple more; I didn't bother making the bash one-liner here robust), Yvain (Scott) takes second with 10, and while I have dozens of Friedman quotes from his books and from other websites, I can't find a one from LessWrong that I saved. (Was Friedman just a lurker on LessWrong?)
On the other hand, surely "best" shouldn't just mean "most prolific", even after a (grossly-stochastic) filter for the top zero-point-whatever percent. Scott is a more careful thinker, and David more careful still, and prudence ought to count for something too ... especially by Yudkowsky's own lights! We praise Newton for calculus and physics and downplay the alchemy and the Bible Code stuff, but at worst Newton's mistakes were merely silly, just wastes of his time. Eliezer Yudkowsky's most important belief is his conclusion that human extinction is an extremely likely consequence of the direction of progress currently being pursued by modern AI researchers, who frequently describe themselves as having been inspired by the writings of: Eliezer Yudkowsky. I'm not sure how that could have been avoided, since the proposition of existential AGI risks has the proposition of transformative AGI capabilities as a prerequisite and there were naturally going to be people who took the latter more seriously than the former, but it still looks superficially like it could be the Ultimate Self-defeat in human history, in both senses of that adjective.
PTSD, symptoms of which were recorded in the medical literature as far back as ancient Greece, as a mechanistic biological response to extreme injury.
Huh. Learn something new every day.
"An Athenian, Epizelos son of Kouphagoras, was fighting as a brave man in the battle when he was deprived of his sight, though struck or hit nowhere on his body, and from that time on he spent the rest of his life in blindness. I have heard that he tells this story about his misfortune: he saw opposing him a tall hoplite, whose beard overshadowed his shield, but the phantom passed him by and killed the man next to him." - Herodotus, "Histories"
I know "PTSD" used to be called "combat hysteria", then "war neurosis", then "battle hypnosis" and "shell shock", and with one name or another it seems to have been common for well over a century ... but I'd been told it's hard to find under any name in accounts of ancient wars. It was tempting to wildly speculate whether the reason for such a strange interesting fact might be technological (after explosive overpressure we can see physical brain bruising, not just psychological damage; we now experience most casualties from impersonal random explosions, not other humans in direct combat) or cultural (we now see a diagnosis of psychological trauma as a first step toward healing, rather than an insulting additional attack to be avoided; we now see war as a necessary evil, rather than a glorious good) or social (the ancient veterans that historians focus on were often large proportions of the upper class; modern veterans are more likely to be isolated). But it's easy to forget that often the explanation for a strange interesting fact is that false and exaggerated "facts" can go viral if they're sufficiently strange and interesting.
@yofuckreddit: Ask your doctor about dosages, too, when you go in for the surgery. When my son had a broken bone healing, the osteopath recommended levels that, although sold over the counter in the vitamin aisle, still had "don't take this without talking to a doctor about it" in the fine print on the jar. Unless you're super prone to kidney stones or something, long-term concerns about hypercalcemia can probably take a temporary back-burner to short-term bone healing improvements.
Did you see the video? I couldn't find a link (Voat's shut down, and a low-res thumbnail plus headline wasn't sufficient for my Google-fu), but I'd have guessed it would be anecdata rather than anything with which we could hope to calculate a frequency.
That third headline was easy enough to find the context for, though. It's on the witchiest-looking website you could imagine, and it's a little hyperbolic (house arrest with an electronic monitor isn't quite "roaming" "freely"), but it's hard to say that it was too hyperbolic, with at least a couple years of hindsight:
One condition of home-arrest required Huff to seek preapproval from a parole officer before having contact with children. But Huff was temporarily returned to prison in late 2018 after an eight-year-old girl was found in his apartment along with her parents.
In January 2019 the clemency board unanimously revoked Huff’s home-arrest and made his return to prison permanent. His only option now is to reapply once a year for release.
Is Wall Street allowed to get much money involved yet? Polymarket.com still lists the US as "blocked"/"completely restricted from accessing Polymarket", Wiki claims the block lasted until December 2, 2025 after Trump "eased the regulatory environment" (with a link to a headline that only mentions trading on election results), and I can't find anything that lists what the current regulatory environment actually permits.
and build a base of clientele and advance
Do you know if there are any good stats on what percent of lawyers are making excellent livings after they take some time to advance? New lawyer salaries have been scarily bimodal for decades now, but it's hard to tell the extent to which that's a career-long problem rather than something the lower half of the distribution just has to work their way out of over 5 or 10 years.
That's not a bad point. I'm old enough that "find a spouse before OKCupid gets bought out" was actually an actionable strategy for me, so I'm hearing the awful reports of modern dating apps second-hand, and I don't actually hear anything about modern non-app-based online dating.
Does it really exist, though? Naively, I'd have expected random Discord channels to be subject to the same social dynamics that work/school/etc were, wherein now that there's a "Find Your Dates Here" Schelling Point of the apps, more and more of the younger generations are starting to consider any but the most slow/careful flirting in ostensibly non-romantic contexts to be intrusive and creepy.
Thank you! I probably saw the reference in that very post and then forgot that I had.
Yeah, but as an average of Good seasons and a Bad one, with the latter more recent and with its biggest problem being the sense that they're out of good new ideas and are having to wring out old ones. Hence the upcoming movie that nobody cares about - it may turn out to be awesome, but I wouldn't recommend going on opening night to check.