The action was rather formulaic - if any plan was announced explicitly or implicitly, you could guarantee something would go horribly wrong within five minutes
Are there many movies that don't follow that trope? If so, how? It seems like an exceptionally difficult cliche for screenwriters to avoid. If you announce the plan and nothing goes wrong, you just wasted everybody's time telling them something redundantly before you show the same thing later. If you don't announce the plan and something goes wrong, you've just confused everybody. If you announce a plan and it seems to go wrong but the real plan is going right then the added levels of contrivance are just a more played-out and mockable trope.
I've only seen a half dozen or so Tommy Lee Jones movies, but you'd think that would be enough to prove to me that he can act, and yet I have to say it wasn't. It did prove that he could act the hell out of one particular fantastic character archetype (highly competent authority, caring in actions but very sparing with warm words, clever with a little dry wit but otherwise blunt and no-nonsense), but that's not always the same thing. Can you suggest anything I should watch that exhibits more range from him? (Please don't say "Batman Forever"...) Some of my favorite movies to watch for the first time were ones in which a popular-but-typecast actor goes way outside of their prior comfort zone (e.g. Jim Carrey in "The Truman Show", Arnold Schwarzenegger in "Twins" or "Kindergarten Cop") or at least plays a part where their usual comfort zone is just a small facet of a more complex character.
in his 1902 work Der moderne Kapitalismus
Looks like it only made it into the third volume? So the term is "only" a little over 100 years old now, and the "stage" it purports to describe is only a little over 110 years old.
To be fair, Sombart describes the previous two "stages" as being about 150 years long each, so maybe he wouldn't have thought we're quite due for the end of his last one yet?
I'm never sure what the interviewers have in mind; neither, I suspect, do they.
If you ask for details, the short-term plan usually boils down to some form of palace economy, with the interviewers imagining themselves among the decision-makers inside the (in this case metaphorical) palace rather than among the decision-targets outside. The long-term plan in theory is for their wonderful planning to allow everyone to flourish and thereby become some sort of New Soviet Man, who will simply make correct and selfless decisions himself and thus allow the decision-makers to return to their labors. In practice, for some reason, the long term never seems to come.
Dorm living (for students from out-of-town at least) isn't very self-selected, and manages constant, recurring proximity pretty well, for colleges with a high enough percentage of students living on campus. I think my college had around 75% of undergrads living on campus at any one time, nearly 100% living on campus at some point - probably nothing of this applies to "commuter colleges". Most importantly, dorm life forces students into constant, recurring proximity in their free time after class, which makes it a lot easier to turn proximity into friendship than the few minutes of free time in between high school classes or whatever opportunities a teacher provides for kids to socialize in classes.
Thinking back: of my top 11 friends in college, I may only ever have had a class with 2 of them. 7 of them (and IIRC one of those 2, too) were people I met in the dorm. I only actually lived in a dorm for 2 out of 4 years, too, but even during the other years I'd be pulled into proximity with dorm residents whenever I'd visit friends or a girlfriend there.
at least after the very beginning
I think this might be a stronger claim. I made something like half of those friends in my first month at college, when the (non-local, which was most of us) freshmen were all in "must make new friends now" mode while the upperclassmen were in "must be nice to the nervous new freshmen" mode. Miss that window somehow and you'll probably at least need to keep your ear to the ground for opportunities to go to clubs and study groups and public parties and such easy-to-enter quasi-social circles.
The details in the 1-star reviews' text can sometimes be helpful; just ignore the ones where the reviewer's an idiot and see what legitimate complaints are left. But yeah, uncalibrated numeric ratings are worthless.
"From the Halls of Montezu-uma,
to the shores of Tripoli,
but not twenty-five percent fa-arther,
cause that'd just be crazy!"
Seems like it ruins the original's meter and spirit.
Maybe focus on "needless"; don't throw everything at the wall to see what sticks.
It's not an assumption, but a conclusion of three propositions:
1. Artificial General Intelligence will lead to Artificial SuperIntelligence carrying out its own goals.
There's no direct evidence for this (for obvious reasons), so maybe it's wrong, but it's really hard to come up with examples of technologies where we did manage to match nature but didn't manage to best nature soon afterward. We can fly 3 times higher and 18 times faster than any bird (or 10,000 times higher and 100 times faster, if you count spacecraft). We can lift 600 times more weight than an elephant, and dive 3 or 4 times deeper than a whale. We have alloys ten times stronger than bone, and weapons a hundred thousand times more lethal than any jaws. It's unlikely that the best medium to host intelligence is wet meat, and when we have better intelligence we're likely to get faster technological improvement, and if faster technological improvement leads to even better intelligence then the scope of that positive feedback loop is incalculable. Once the loop goes far enough, humans are no longer in it, and objecting to the subsequent directions it takes might be about as effective as chimps throwing feces at an incoming nuclear warhead. Either we get AI goals right from the start, or we don't.
2. Most goals that don't explicitly include "don't be omnicidal" end up implicitly entailing "be omnicidal", and even goals that do include "don't be omnicidal" can get closer to that than we'd be comfortable with.
I don't care much about ants, so I happily live in a home and drive on roads and go to buildings where we paved over all the ants that used to live there. I didn't hate those ants, it's just that they were using atoms which I wanted to use for something else, as the old saying goes. I do have goals that include "don't be omnicidal", even of ants, so if we got close to actually driving many ant species (or species that prey on them) extinct then I'd want to hit the brakes, but in the meantime I'll poison any ant hill that gets in the way of, say, having a slightly nice lawn.
3. It's nearly impossibly hard to accurately formalize our goals, and in the end all software is a formal set of instructions.
The worst software is software that was almost correct. Folks tried to write software and firmware for a particular new hard drive interface, but there was an incompatibility and it got the "edit the drive contents" part correct but not the "to what the user wants" part, and a friend lost his files. Folks try to write software to do things locally for its users based on what it reads in incoming internet packets, and sometimes they get the "read in incoming internet packets" and "do things locally" bits correct but not the "for its users" bits, and then a thousand computers are pwned by a Russian botnet. In those sorts of cases we just delete everything and restore from backup, but if software intended to edit the universe goes badly, we don't want to delete and we don't have any full backups.
This is the proposition that's gotten the weakest recently, now that we've basically given up on formalizing AI goals and are training them instead. I'd say it makes conclusions of Doom much less certain, and I'd love to say that it's made them weak enough to refute them ... but how well is the training going? AI still (albeit more and more rarely) even makes blatant mistakes of fact, including in cases where checking self-consistency and checking against external research could have corrected it. Mistakes of morality are much trickier. The is-ought problem means you've got to get ethics mostly right before self-consistency can help you correct any remaining mistakes. "External research" in questions of morality gets us to countless mutually-incompatible religions and ideologies, generally with many mutually-incompatible interpretations. AI alignment is unmoored from objective reality in a way that AI capabilities aren't, so it's still quite possible that the latter will greatly outpace the former.
I might go see a fourth Iron Man movie. I'm unlikely to go see Ant-Man. When we get to "nobody knows or cares about this character, why are they getting a movie?"
Iron Man was the first "nobody knows or cares about this character". He was the best B-list guy that Marvel Studios could pull out of a hat, and they settled for him because their A-list characters (Spider-Man, Wolverine and the X-Men, Hulk, Fantastic Four) all had movie IP either sold to or at least encumbered by other studios. They had him played by Robert Downey Jr., then a C-list actor most famous for tabloid-bait substance abuse problems. It just turned out that RDJ was still an excellent actor, who managed to answer the "why are they getting a movie" question so well that we forgot it was ever even a question.
Later they started digging into their D-list characters ... and they still managed to hit it out of the park: Guardians of the Galaxy is the top-rated non-sequel movie in the MCU.
The problem isn't that nobody cares about C-list Ant-Man (whose first movie is higher-rated than Iron Man 2 or Iron Man 3), the problem is that the damn producers, directors, and writers stopped caring about Ant-Man. In Ant-Man 3 there's no significant character growth, meager personal/emotional stakes, no proper utilization of the drama they set up for him in Endgame, a cast crowded to the point that he felt like an extra in his own movie, and "ACTION!!!" that's so flooded with CGI that even the most basic physical conflict feels about as tense as playing a video game. Ant-Man seems to primarily be there because giving him top billing was expected to lure in an audience (which your testimony suggests was somewhat pointless), and their major concern for the audience was that we be exposed to a plot focused on setting up Kang as a multi-movie villain (which turned out to be completely pointless after they had to fire Jonathan Majors).
Their new plan is to bring in Doctor Doom (an A-list villain), played for some reason by RDJ (now an A-list actor), but you still might want to consider staying home and browsing Netflix, because dragging back RDJ suggests that they're still focusing on how to lure in an audience rather than on A-list writing.
I also buy pre-peeled garlic, which keeps pretty well
Infinitely well, if you chop and then freeze it. I learned this from my wife, who generally hates frozen vegetables, but who always has a baggie of frozen garlic ready, shaped into a thin patty so it's easy to break off chunks of various sizes. It somehow thaws and cooks to be indistinguishable from fresh, better than any jarred garlic we've tried, and infinitely better than trying to sub in powders or salts in a recipe which needs fresh.
95% of party members are too sycophantic to go against the party line, but do be careful to research a bit before casting protest votes, in case your state has one of the other 5%.
"Our lands were taken from us before, and God willing, we may one day seek them." - Rep. Ilhan Omar
The idea that they should retake Somaliland is actually the most charitable interpretation of that speech; the uncharitable interpretation is that she was suggesting that they should retake all of "Greater Somalia", including parts of modern Ethiopia and Kenya.
it has to be really, really bad before the company starts cutting you off
In the software world we call this "missing test coverage". If your safety features don't get tested until any test failure is apocalyptic, you don't actually have safety features. Maybe we should be picking more politically neutral or less politically relevant test cases, but anything is better than nothing.
If you're worried about big society-spanning plagues then those are difficult
If they're pre-existing plagues, then they're difficult-to-impossible. Anything you can get by introducing a few mutations into some virus is at most a few mutations away from a virus that wasn't currently a society-spanning plague. Centuries ago you could have a germ slowly co-evolve with the immune systems of some subset of humanity and then eventually make its way out to devastate a larger immunologically unprepared population, but these days there aren't many subsets of humanity that aren't at most a weekly airplane flight away from the rest of us.
If they're not pre-existing plagues, it's kind of harder to say, isn't it? Gunpowder would have been a pretty awesome capability for a predator to have, but it was impossible to evolve except by the extremely roundabout method of "get intelligence to come up with it". There may be similarly awesome capabilities that are only possible to put into germs in the same way.
I don't want 'suppress info' to be the default response.
Nor do I ... but while I'm libertarian enough to have voted (L) in every presidential election, I'm also pessimistic enough to wonder how amenable to my desires the universe really is. Totalitarian suppression of change is itself an existential risk, whether it fails (which historically tends to be a bloody process) or succeeds (in which case a "boot crushing a human sapient face forever" is itself a possible contributor to the Fermi paradox), but the seemingly-obvious solution of "just don't do that" might seem less obvious in a world where a home biolab ends up being a thousand times more dangerous than an airline ticket and a boxcutter were in our world.
No, that's the first goal of a government. And Constitutions are the means of achieving that, not just a means, because it's such a complex and difficult-to-evaluate goal that you have to operationalize it in terms of more objective rules; otherwise in practice it stops being a goal and starts being an excuse.
The exact quote was, "the first duty of the American government is to protect American citizens, not illegal aliens" ... which is a bit more obviously Orwellian than your paraphrase. Maybe that's just a question of style; as a question of substance, both the original quote and the paraphrase are wrong.
The first duty of the American government is to obey the Constitution.
That's a complicated duty, and far too often breached, but at least technically that's the duty that qualifies our leaders to fit the definition of "the American government" rather than for an entry in the wiki for auto-coup. It's not always even a popular duty, though it's generally at least popular enough that "pass an amendment to widen what other duties the government can legally handle" doesn't ever get considered. Many people think violating the Second Amendment would be a good way to protect American citizens from shootings. Most think violating the Tenth Amendment often helpfully protects American citizens from being taken advantage of. Some think violating some combination of the Fourth through Eighth Amendments is a good way to protect American citizens from criminals. A few think violating Art. I election laws could be justified to protect American citizens from bad politicians. Many thought that violating the Assembly Clause was justified to protect American citizens from Covid.
They're all wrong.
Those are all real threats that American citizens deserve some protection from, true, and so are illegal aliens (both in the sense that some are serious criminals and in the sense that all of them do a little bit to undermine the rule of law), but the concept of protection is not a backdoor password to unchecked power, and it seems pretty transparent that the people who attempt to use it that way are more interested in the power than in the protection.
Just general anti-bot stuff, probably, though the desperation for more AI training data probably explains why bots got so ill-behaved a few years back. Our CI server has to hide even open-source logs behind Cloudflare settings harsh enough to block cURL, else the traffic from spiders can bring it to its knees. "Figure out how to get Codex to emulate a full browser" is on my TODO list somewhere...
It seems to be a bigger thing than it was when I was a kid, even. Difficulty has increased by roughly one level (school->chapter, chapter->state, state->national, national->good-luck) over the past few decades, and that seems to be well-calibrated to account for how much more intense the competition is.
My experience is that kids universally understand this simple concept, and that it takes a calculus teacher to beat such sensible reasoning out of them.
Normally I have a least a tiny bit of sympathy for educational "mainstreaming", but this really is the sort of thing that ought to be handled well before calculus by at least having some geeky books on hand for the faster kids to read while the kids who need review are covering fractions for the fourth time. Maybe most kids can't learn the standard stuff faster without getting stuck completely out of sync with the teachers' lessons, but asides like "infinity as a limit" vs "infinities in cardinal numbers" vs "infinities in ordinal numbers" ought to be written up in a child-friendly presentation somewhere, right?
I let a MathCounts club nerd-snipe me a month or two ago with the question "is infinity a number". I managed to avoid diving into set theory and losing them, but went through enough of the "things you call numbers today that weren't originally thought of as numbers" (zero, fractions, negatives, irrationals) and "things you'll call numbers later that you don't think of as numbers today" (imaginaries) to get across that names like "number" are a matter of definition.
be weary of anyone who does pedophile lite behavior
"wary". Though in the context of serial offenders on two continents with decades of abuses it's an understandable typo.
Most of the games my kids like fit your definition but don't really fit your examples. Listing them all anyway, in roughly increasing order of how much I like playing multiplayer games of them:
- Stardew Valley
- Minecraft
- Core Keeper
- Don't Starve Together
- Team Fortress 2, Mann vs Machine
- Project Zomboid
- Wildermyth
Of course the all-time great is one I haven't introduced my kids to, because it really needs closure and if you play it in 40 minute chunks you'll need like a hundred of them: Baldur's Gate 1+2.
If you're curious about any of those let me know and I'll elaborate.
One of my kids likes playing Peak with her friends, but the rest of us haven't tried it yet.
I think the key words here are "aimed" and "government agency". Amazon famously didn't make its first annual profit for nearly a decade, but investors were still expecting profit eventually, estimating the likelihood of net profit in the long run, and wouldn't have funded it indefinitely if that expectation ended. A government agency has no such aims and no such limitations, whether or not it does its own production, but at least if it has to procure from among competing third parties there's someone who has an incentive to keep costs down.
If the murder rate stays constant, but “rate per potential exposure” gets worse, someone is getting exposed at a higher rate.
Just the opposite: murders/population = murders/exposures × exposures/population. If murders/population is constant while murders/exposures increases, then exposures/population, the exposure rate, must be decreasing inversely.
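To make the arithmetic concrete, here's a tiny sanity check with made-up numbers (nothing here comes from real crime data; every figure is invented for illustration):

```python
population = 1_000_000

# Year 1: half the population gets "exposed" (out walking, driving, etc.),
# and there are 100 murders.
exposures_1, murders_1 = 500_000, 100

# Year 2: same population, same 100 murders (murders/population constant),
# but only half as many exposures.
exposures_2, murders_2 = 250_000, 100

# Identity: murders/population = (murders/exposures) * (exposures/population)
rate_pop_1 = murders_1 / population          # 0.0001
rate_pop_2 = murders_2 / population          # 0.0001, unchanged
per_exposure_1 = murders_1 / exposures_1     # 0.0002
per_exposure_2 = murders_2 / exposures_2     # 0.0004, doubled
exposure_rate_1 = exposures_1 / population   # 0.5
exposure_rate_2 = exposures_2 / population   # 0.25, halved

assert rate_pop_1 == rate_pop_2
assert per_exposure_2 == 2 * per_exposure_1
assert exposure_rate_2 == exposure_rate_1 / 2
```

So with a constant murder rate and a doubled rate per exposure, the exposure rate had to fall by exactly half; that's the sense in which it "must be decreasing inversely".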
Shouldn’t it be strictly easier to tell which neighborhoods have turned into death traps?
Is it? I know there are sites that give neighborhoods "walkability" scores, but at least the first one I pulled up is only giving a theoretical number based on the mass transit availability, distances to the nearest grocer/cafe/school, etc; I'd have no idea how to find an actual number of people who walk down a particular street (or who drive in a particular area - the only armed robberies I found out about first hand were at a stoplight and in a parking lot) on an average day.
nation-building wasn't yet a dirty word
Even before 9/11, "nation building" was enough of a dirty word that popular opposition to that exact phrase helped give the Presidency to ... checks notes ... George W. Bush.
I think Bush did realize that 9/11 gave him ... not a blank check, but a ton of latitude ... but he also realized that he was cashing in that latitude just by reversing his campaign's attitude and launching major foreign wars, and so he was naturally (if mistakenly) reluctant to go all the way and admit that any such war wouldn't actually be worthwhile unless and until we built a non-hostile nation in place of the one the war knocked over. We instead just prayed that the Northern Alliance would step right into the power vacuum and develop such a nation for us, and so instead of sending in your 500,000 troops to rebuild we just sent in ... 5,500? Roughly one person for every 3,500 Afghan people? That sounds like such a tiny force that I'm tempted to look through the wiki history for vandalism, but in any case it was enough to handle the "knocked over" phase of the war admirably; it was only afterward that we should have either left entirely or gone in on rebuilding en masse rather than hoping to get away with the "advisory" gambit alone this time.
Source: you made it up and it sounded too good to check.
A couple months ago Elon Musk reposted this tweet including: "Be very, very strict with SNAP, Section 8, and EBT. Force these do-nothings to get up and go to work." This might say questionable things about his thoughtspace or his priorities, but not his money. The net worth of the world's richest man increased by $200 billion last year.
I think there's room to ask about whether, even as the crime rate-per-population has gone down dramatically, the rate-per-potential-exposure has been less changed or has gone up. As Scott says, "We’re a safetyist culture"; we avoid risks more than we used to. We also have more attractive alternatives to risks - where I would play sports in the street or at worst play video games in person with the neighborhood kids, my children go to the rock-climbing gym or play networked video games with their friends farther away. I grew up in a residential area where once I got old enough I could walk to a convenience store, perhaps past some sketchy houses; my kids are growing up in a giant suburb where it wouldn't matter if the houses were sketchy because there's nothing they could get to on foot regardless.
On the other hand, the answer might just be "no, the rate-per-potential-exposure has gone down too". Or it might be that this isn't a sufficiently well-defined metric, because in a big country there's always someplace where it's just too dangerous for an innocent person to go and someplace else where it's perfectly safe and there's no obvious way to decide how to weight those places when averaging.
I've never seen a Steven Seagal movie and I always figured if I did it'd be "Under Siege", but man, even the (VHS release?) trailer on IMDB excitedly sums it up as "Die Hard on a battleship", and IMHO the Die Hard On An X genre is a pretty risky one. There are some decent TV episodes that got enough mileage out of just putting familiar characters into that situation, but for a movie you've got to have some great idea on top of the "Die Hard" premise to make it feel like anything other than a cheap knock-off.
"Cobb" I'd never heard of before, though, and it looks surprisingly interesting. The trailer alone makes me think I was too skeptical of Tommy Lee Jones. Thanks!