Everybody sees the dangers of cultural appropriation once it's their culture.
In an ideal world "StarCraft 2" and "SC2 but with better AI" would just be two different game variants, and a vanilla-SC2 player wouldn't complain about the AI options any more than a blitz-chess player would complain about someone else preferring to play without any clock.
But everybody's attention is a scarce resource vied over by competitors, and in a world where network effects make it much more enjoyable to have everybody else's attention go to the same target as yours does, it's actually reasonable to worry about whether an alternative is going to stop that from happening. If you actually preferred Betamax over VHS, HD-DVD over BluRay, etc, it sucked to be you.
I thought SC2 was popular enough that nobody should need to worry about splitting the player base, though; surely both sides of any split would be able to find online matchups easily for years to come? At the very least an experienced player who eschews better AI should be able to find a game against a noob who doesn't. Maybe video game fans have just been through so many iterations of the "Sega Genesis vs Super Nintendo" fight that getting worked up about such things is a reflex now.
So they push back on it because they don't want to lose the thing they love, and they're afraid that's what would happen.
If you want to see these sorts of fights played out on Hard Mode, look at the worries some people have over driverless cars or vegan meat substitutes. The bailey is that driverless cars are unsafe or that vegan pseudomeats are unhealthy, and that no amount of technological improvement will ever make them good enough, but I think the (occasionally explicitly stated!) motte in each case is the risk that, once the new alternative actually is better for most people, there'll be pressure to make the traditional alternative outright illegal. Nobody's ever going to ban anyone's preferred versions of Star Trek or StarCraft, but animal rights groups or public safety groups might actually get some traction against real meat or human-error-prone cars once the main argument for them is pared down to "Freedom!"
Wait, there are multiple people confused about what OP is confused about?
I'd have presumed the grey area here is a belief like "it's illegal to hit cops with your car, but the response to a crime that poses no threat of death or grievous injury is supposed to be an arrest, not a shooting" (correct!) combining with a belief like "it should have been immediately clear to the ICE officer that that suddenly-accelerating SUV posed no threat of death or grievous injury to anyone" (not "obviously" correct, unless there's some really good video that contradicts what I've seen from seemingly-good-enough videos).
This actually is a scissor statement out of mythology, isn't it? It's not just obviously true to some people and obviously false to others, but so obviously so that people can't even imagine what chain of reasoning might lead someone to take the contrary position.
Hopefully @EverythingIsFine will pop in to explain that I'm right ... or that someone else is, or a different chain of reasoning still. If my guess is right then there's so many failures of theory-of-mind going on right now that I have to wonder how badly I'm doing myself.
information theory broadly defines some adjacent bounds.
Don't forget physics. We're probably nowhere near the limit of how many computational operations it takes to get a given "intelligence" level of output, but whatever that limit is will combine with various physical limits on computation to turn even our exponential improvements into more logistic-function-like curves that will plateau (albeit at almost-incomprehensible levels) eventually.
at the scale of economics, "singularity" and "exponential growth" both look darn similar in the near-term, but almost all practical examples end up being the latter, not the former.
"Singularity" was a misleading choice of term, and the fact that it was popularized by a PhD in mathematics who was also a very talented communicator, quoting one of the most talented mathematicians of the twentieth century, is even more bafflingly annoying. I get it, the metaphor here is supposed to be "a point at which existing models become ill-defined", not "a point at which a function or derivative diverges to infinity", but everyone who's taken precalc is going to first assume the latter and then be confused and/or put-off by the inaccuracy.
That said, don't knock mere "exponential growth", or even just a logistic function, when a new one outpaces the old at a much shorter timescale. A few hundred million years ago we got the "Cambrian explosion", and although an "explosion" taking ten million years sounds ridiculously slow, it's a fitting term for accelerated biological evolution in the context of the previous billions of years of slower physical evolution of the world. A few tens of thousands of years ago we got the "agricultural revolution", so slow that it encompasses more than all of written history but still a "revolution" because it added another few orders of magnitude to the pace of change; more human beings have been born in the tens of millennia since than in the hundreds of millennia before. The "industrial revolution" outdoes the previous tens-of-millennia of cumulative economic activity in a few centuries.
Can an "artificial superintelligence revolution" turn centuries into years? It seems like there's got to be a stopping point to the pattern very soon (years->days->tens-of-minutes->etc. actually would be a singular function, and wouldn't be enabled by our current understanding of the laws of physics), so it's perhaps not overly skeptical to imagine that we'll never even hit a "years" phase, that AI will be part of our current exponential, like the spreadsheet is, but will never kick off another vastly-accelerated phase of growth.
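A toy sketch (my own illustration, not from the thread) of why the terminology matters: an exponential, a logistic curve, and a hyperbolic curve (a true finite-time singularity) can look nearly indistinguishable early on, even though their long-run behavior is completely different. The parameter values here are arbitrary, chosen just to make the comparison visible.

```python
import math

# Three growth curves that look similar near t = 0 but diverge later:
# - exponential: e^t, grows forever but never blows up at any finite time
# - logistic:    plateaus at a carrying capacity k (physical limits)
# - hyperbolic:  1/(t_c - t), a true finite-time singularity at t = t_c

def exponential(t):
    return math.exp(t)

def logistic(t, k=100.0, r=1.0, t0=4.0):
    # S-curve: exponential-looking at first, then flattens out at k
    return k / (1.0 + math.exp(-r * (t - t0)))

def hyperbolic(t, t_c=10.0):
    # diverges to infinity as t approaches t_c from below
    return 1.0 / (t_c - t)

# Early on, all three are small, smooth, and increasing;
# only much later does the qualitative difference show up.
for t in [0.0, 1.0, 2.0, 3.0]:
    print(t, round(exponential(t), 3), round(logistic(t), 3), round(hyperbolic(t), 3))
```

The point of the comparison: only the hyperbolic curve is "singular" in the precalc sense, and nothing in physics seems to permit it; the empirical question is whether we're on the exponential or already partway up the logistic.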
You're already pointing out some evidence to the contrary, though:
real humans require orders of magnitude less training data --- how many books did Shakespeare read, and compare to your favorite LLM corpus --- which seems to mean something.
This is true, but what it means from a forecasting perspective is that there are opportunities beyond simple scaling that we have yet to discover. There's something about our current AI architecture that relies on brute force to accomplish what the human brain instead accomplishes via superior design. If we (and/or our inefficient early AIs) manage to figure out that design or something competitive with it, the sudden jump in capability might actually look like a singular-derivative step function of many orders of magnitude.
I've only looked at his introductory post, so hopefully he addresses my point later, but the introductory post would seem to be the natural place to discuss why we don't have more amendments. He does discuss that question, but with what I feel is only one of multiple possible answers:
"...you need about 85%+ public support to ratify a constitutional amendment. It’s pointless because, if you could ever get that much public support for your divisive policy question, you’d no longer need a constitutional amendment, because you’d have won the argument and all the relevant laws already."
This is true for many object-level laws, but there are loads of exceptions. An Amendment allows you to credibly precommit to not change laws later, which makes it attractive for a number of tasks:
- Rules intended to protect human rights, where we fear our descendants might backslide enough to repeal a mere law but not enough to overturn an amendment.
- Rules intended to be compromises via universalizing principles, for which a law isn't enough to enforce the compromise. If I hate being unable to condemn some right-wing ideology and you hate being unable to condemn some left-wing ideology, I might hate the thought of losing my freedom to censors half the time more than I relish the thought of the same happening to you the other half of the time, and something like the First Amendment is a win for both of us, even if we couldn't get a coalition to protect either ideology alone. In a bad enough Culture War making such a principle into law may feel like it's just giving the other side a chance to get a 4+ year head start on attacking us again when they repeal the law first while they're in power, but an amendment might have more teeth.
- Rules which cover the biggest meta-level questions of how the mechanisms of government should work, the cases where the constitution already specifies a mechanism that can't be overridden by a mere law. The House Rules Committee can do a lot, but it can't reduce the requirements for overriding a Presidential veto (his proposal #1), or expand its size to 11,000 (his #4), etc.
And pretty much every one of his proposals falls into category 3 here, doesn't it? He's not suggesting a "Write the Roe v Wade penumbras into the umbra" amendment, or a "define personhood as starting with conception" amendment; all his stuff is procedural at a high enough level that you can't do it without an Amendment.
So ... why don't we do any of those Amendments, either, anymore? I'd say it's a combination of our increasing political polarization with the realization that, so long as we're trapped by Duverger's Law into a two-party system, every meta-level change is also a potential change in the equilibrium point of that system, a zero-sum game. Either more easily overridden vetoes will mostly help the Democrats, in which case you're not going to get a supermajority because you can't persuade enough of the Republican-leaning half of the country to agree, or they will mostly help the Republicans, in which case you're not going to get a supermajority because you can't persuade enough of the Democratic-leaning half of the country to agree. Perhaps at some point we'll have enough people sick of both parties that that will be a voting bloc worth catering to? But until then this is all a sadly academic discussion.

"Just walk away." is the first thing I think of ... I can't believe it didn't make the Trope page!
"He's pretty good!"