This seems like a potentially interesting argument to observe play out, but it also seems close to a fundamental unknown unknown. I'm not sure how one could meaningfully measure where we are along this theoretical asymptote in the relationship between intelligence and utility, or even whether there really is an asymptote. What arguments convinced you both that this relationship is asymptotic (or at least has severely diminishing returns), and that we are at least halfway along the way to that asymptote?
I should note that Japan too has recently discovered the joys of ActuallyIndians. If you go to any convenience store or quite a lot of chain restaurants, all the staff are Indian now and have been for several years. Maybe since Covid?
It’s less so outside Tokyo but I imagine that’s a matter of time.
First, this seems entirely unprincipled, given that NATO (and its proxies) ultimately relies on conscription.
Second, I see here no reason to believe that AI or any sort of productivity improvement changes the base reality that it is people who exist who shape society. Japan's automation strategy is a pragmatic mitigation but doesn't change the destination of their society.
What the hell kind of twisted definition of "flourishing" are we using here that people being so secure and domesticated they won't have children counts? It's a zoo you're building.
Why is this a post in the culture war thread? You went to a store that you had a negative predisposition towards, had an encounter with an employee who you considered rude and unattractive, a fact you decided to share for some inexplicable reason. You predictably were shunned because you came with the express purpose of not purchasing anything and wasting the service workers' time.
I have no idea why your insults or opinion that 'starbucks was a shit-ass place full of soulless NPCs' are at all relevant here, nor do I agree with your hastily drawn conclusions about supposed 'curiosity' not being rewarded. If you have to question the point of your post, perhaps that is a sign it does not belong.
In fairness, given your approach, it's possible she mistook you for a journalist.
Gregory Clark on horses and the automobile comes to mind here:
There was a type of employee at the beginning of the Industrial Revolution whose job and livelihood largely vanished in the early twentieth century. This was the horse. The population of working horses actually peaked in England long after the Industrial Revolution, in 1901, when 3.25 million were at work. Though they had been replaced by rail for long-distance haulage and by steam engines for driving machinery, they still plowed fields, hauled wagons and carriages short distances, pulled boats on the canals, toiled in the pits, and carried armies into battle. But the arrival of the internal combustion engine in the late nineteenth century rapidly displaced these workers, so that by 1924 there were fewer than two million. There was always a wage at which all these horses could have remained employed. But that wage was so low that it did not pay for their feed, and it certainly did not pay enough to breed fresh generations of horses to replace them.
And as others have pointed out in reference to this, domestic horses in the modern day do live much more comfortable lives than those workhorses of old… but there's a whole lot fewer of them around.
My guess is that, due to whatever labour laws and corporate rules apply, they can do the shirt protest but can't talk about it with customers. She probably assumed you were some kind of spy from corporate trying to get her fired.
I don't really follow what the woman you met being obtuse has to do with anything. Surely, if there's a nationwide strike and she wasn't part of it, she's by definition not representative of the average Starbucks employee?
Yes. Classic WoW has a lot of dynamics to it that keep people playing against their will, sort of. In some real sense, your guild has invested in you, and your character, by taking you along, giving you loot and such, so you feel like you owe them your participation, so that your friends can get their rewards too, and your group can keep progressing.
Probably better for Bubbles long term, but it was shitty the way it went down that night. She probably cried. It's like getting dumped by your long-term friend group and finding out half of them never liked you and were talking about you behind your back.
I think the problem is that:
- We (often) bring them in to fill specific shortages, enduring the larger problems that arise (loss of cultural integrity, lowered trust, often high long-term welfare costs) because we need those shortages fixed no matter what.
- There is no incentive for them to continue fixing those shortages after they get a long-term visa.
- The shortages then remain unfilled, so we bring in more immigrants. Meanwhile the long-term consequences are getting more and more severe.
Okay, but is there an economic difference?
Yeah, Blossom was a unicorn. It's pretty rare to find someone with that combination of personality and skill. She lived somewhere exotic IIRC, like Hawaii or Alaska or something, so her internet wasn't amazing, and Classic WoW was one of the few games that was forgiving enough of network latency.
I cut Buttercup some slack because she had a tough job IRL (nurse, I think), and playing a healer in game is suffering, as you desperately try to keep people alive through their own mistakes, gradually failing at it. And yeah, maybe some insecurity.
This incident was probably the inflection point in my enthusiasm for Classic. I stuck it out a while longer out of loyalty to the group, and we eventually cleared Lich King 25-man heroic, widely considered the point at which you've beaten the game, then quit. It was that, and other similar incidents, that made me realize the juice wasn't worth the squeeze. The game mechanics naturally led to that sort of conflict, and it just didn't have to be that way. I didn't have to play that kind of game. Would be better if I didn't. Is better now that I don't.
That stat doesn’t say anything about the five-year trick. Or about Poles. Wait, it’s not even limited to migrants! This is like using the African-American unemployment rate to argue that black immigrants are planning to quit. That inference doesn’t hold for the U.S., and I would like to see better data for the U.K.
But let’s assume that 10.7% of Pakistani migrants are in fact arriving, cleaning bedpans for five years, then quitting to live off the King’s largesse. Why aren’t native-born Brits doing the same thing? To me, that suggests it’s not actually a good deal for anyone raised to expect a first-world standard of living. That’s exactly the kind of arbitrage @MadMonzer is talking about.
Thank you very much for this post. Your three-question analysis really helps highlight my differences with most people here on these issues, because I weight the probability of #2 being "no" even higher than you do (higher than I weight #1, which I also think is more likely "no" than "yes").
That said, I'd like to add to (and maybe push back slightly on) some of your analysis of the question. You mostly make it about human factors, where I'd place it more on the nature of intelligence itself. You ask (rhetorically):
We probably seem magical to animals, with things like guns, planes, tanks, etc. If that’s the difference between animal intelligence → human intelligence, shouldn’t we expect a similar leap from human intelligence → superhuman intelligence?
And my (non-rhetorical) answer is no, we shouldn't expect that at all, because of diminishing returns.
Here's where people keep consistently mistaking my argument, no matter how many times I explain it: I am NOT talking about humans being near the upper limit of how intelligent a being can be. I'm talking about limits on how much intelligence matters as power over the material world.
Implied in your question above is the assumption that if entity A is n times smarter than B (as with, say, humans and animals), then it must be n times more powerful; that if a superhuman intelligence is as much smarter than us as we are smarter than animals, it must also be as much more powerful than us as we are than animals. I don't think it works that way. I expect that initial gains in intelligence, relative to the "minimally-intelligent" agent, provide massive gains in efficacy in the material world… but each subsequent increase in intelligence almost certainly provides smaller and smaller gains in real-world efficacy. Again, the problem isn't a limit on how smart an entity we can make, it's a limit on the usefulness of intelligence itself.
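To make the shape of that claim concrete, here's a toy sketch in Python (the saturating curve and its parameters are pure illustration on my part, not a real model of intelligence):

```python
# Hypothetical saturating "usefulness of intelligence" curve:
# u(i) = u_max * i / (i + k). The functional form and the constants
# are assumptions chosen purely to illustrate diminishing returns.
def efficacy(intelligence, u_max=100.0, k=5.0):
    return u_max * intelligence / (intelligence + k)

# Each doubling of intelligence buys a smaller gain in efficacy:
for i in [1, 2, 4, 8, 16, 32, 64]:
    print(i, round(efficacy(i), 1))
# 1 -> 16.7, 2 -> 28.6, 4 -> 44.4, 8 -> 61.5, 16 -> 76.2, 32 -> 86.5, 64 -> 92.8
```

On a curve like this, an agent eight times smarter than another can end up only a few points more effective, nothing like the ants → humans gap.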
Now, I've had a few people acknowledge this point, and accept that, sure, some asymptotic limit on the real-world utility of increased intelligence probably exists. They then go on to assert that surely, though, human intelligence must be very, very far from that upper limit, and thus there must still be vast gains to be had from superhuman intelligence before reaching that point. Me, I argue the opposite. I figure we're at least halfway to the asymptote, and probably much more than that — that most of the gains from intelligence came in the amoeba → human steps, that the majority of problems that can be solved with intelligence alone can be solved with human-level intelligence, and that it's probably not possible to build something that's 'like unto us as we are unto ants' in power, no matter how much smarter it is. (When I present this position, the aforementioned people dismiss it out of hand, seeming uncomfortable to even contemplate the possibility. The times I've pushed, the argument has boiled down to an appeal to consequences; if I'm right, that would mean we're never getting the Singularity, and that would be Very Bad [usually for one or both of two particular reasons].)
ChatGPT suggests this: https://conversationswithtyler.com/episodes/patrick-collison/
Speaking as someone who is against the draft, I am also against forcing women into performing an equivalent sacrifice.
We're in the age of automation and exponential productivity growth. Surely the solution is simply to guarantee security and flourishing for everyone. I cannot imagine any version of the world where solving that engineering problem is actually harder than convincing millions of women to sacrifice their security.
For goodness sake, we're already most of the way there!
As for being conquered, I'm willing to bet everything on NATO. A planet-spanning military alliance that spends more on weapons than the rest of the world combined will not be overcome so easily. China might get Taiwan back, but they're not going to land troops in San Francisco any time soon. In the long run, AI will change the nature of the game in a way that makes population dynamics obsolete long before any power rises that can credibly challenge NATO.
The idea of technological determinism (of which "when technological changes to economics say we don't need these people, ethics will evolve to agree" would be an example) is still a pretty controversial one, I think, for lots of both bad and good reasons.
Marx was a huge early booster of technological determinism, and other ideas among Marx's favorites were so genocidally foolish that we should default to skepticism in individual cases, though that doesn't prove every idea of his was a bad one. He also didn't apply the idea very successfully, but perhaps that's just not easy for people whose foolishness reaches "death toll" levels.
There are some cases where trying to apply the idea seems to add a lot of clarity. The emergence of modern democracies right around the time that military technology presented countries with choices like "supplement your elite troops with vastly larger levies of poor schlubs with muskets" or "get steamrollered by Napoleon" sure doesn't sound like a coincidence. But, it's always easier to come up with instances and explanations like that with hindsight rather than foresight. Nobody seems to have figured out psychohistory yet.
There are also some cases where trying to apply the idea doesn't seem to add so much clarity. Africans with mostly spears vs Europeans with loads of rifles led to colonialism, chalk one up for determinism, but then Africans with mostly rifles vs Europeans with jets and tanks wasn't a grossly more even matchup and it still ended up in decolonization. These days we even manage to have international agreement in favor of actually helpless beneficiaries like endangered species.

Perhaps World War 2 just made it clear that "I'm going to treat easy targets like garbage but you can definitely trust me" isn't a plausible claim, so ethics towards the weak are a useful tool for bargaining with the strong? But that sounds like it might extend even further, too. To much of the modern world, merely keeping-all-your-wealth-while-poor-people-exist is considered a subset of "treating easy targets like garbage", and unless everybody can seamlessly move to a different Schelling point (libertarianism might catch on any century now), paying for the local powerless people's dole from a fraction of your vast wealth might just be a thing you do to not be a pariah among the other people whose power you do care about. If population was still booming, the calculation of net present value of that dole might be worrisome (let's see, carry the infinity...), but so long as the prole TFR stays below replacement (or at least below the economic growth rate), their cost of living isn't quite as intimidating.
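As a side note on that "carry the infinity" arithmetic, here's a back-of-envelope sketch (the rates are made-up examples, not claims about any real economy): the dole is a perpetuity, and its present value only converges when the cost's growth rate stays below the discount rate.

```python
# Present value of a perpetual dole whose cost grows at rate g,
# discounted at rate r: PV = sum over t of cost * ((1+g)/(1+r))**t.
# The series converges iff g < r; otherwise you really do carry the infinity.
def dole_present_value(cost, g, r, horizon=10_000):
    ratio = (1 + g) / (1 + r)
    if ratio >= 1:
        return float("inf")
    # a long finite horizon approximates the geometric-series limit
    return sum(cost * ratio**t for t in range(1, horizon))

print(dole_present_value(cost=1.0, g=0.02, r=0.03))  # finite: ~102x one year's cost
print(dole_present_value(cost=1.0, g=0.03, r=0.02))  # infinite
```

So a shrinking, or merely slower-growing, dependent population turns a divergent series into a bounded bill.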
That theory sounds like just wishful thinking about the future, but to be fair a lot of recent history sounds like wishful thinking by older historical standards.
This is all wildly speculative, of course, but so is anything in the "all-powerful and perfectly obedient machinery" future. I stopped in the middle of writing this to help someone diagnose a bug that turned out to be coming from a third party's code. Fortunately none of this was superintelligent code, so when it worked improperly it just trashed their chemical simulation results, not their biochemistry.
Women date and, to a lesser extent, marry and reproduce with lots of untrustworthy men. That doesn't mean that the men they don't date are trustworthy, but it does suggest that trustworthiness isn't the primary blocker. And if you're a man who can't get a date and wants one, it's better to focus on changing other aspects of yourself than on some fuzzy concept of trustworthiness. Almost tautologically, those other aspects are the ones that fall into the broad category of attractiveness.
The Americans with Disabilities Act of 1990 is probably the single worst law since the Civil Rights Act of 1964.
That they are foreigners.
Those people are not trustworthy; they're untested.
This seems like it's veering towards a No True Scotsman sort of thing. As in "if women don't want to be around you, it's clear they're not at ease in your presence, which is what trustworthiness means, therefore you weren't trustworthy to begin with". We can generally infer "trustworthiness" from how people act in other areas of their life: whether they follow the rules, don't cheat, etc. Of course men could behave differently in contexts that involve women, but we'd generally expect a pretty strong correlation. Yet plenty of men who are trustworthy in other areas don't find much success in love.
Here's my own personal take of what it takes to be successful with women:
- Be attractive, and don't be unattractive. This is like 50-75% genetic, but you can put in an effort to change yourself, or at least present yourself in the best light. Physical attractiveness is the bedrock that everything else is built on; if you have it, everything will be far, far easier. If you don't, it will be much harder.
- Have the right personality. There are a lot of factors here, but in a nutshell you want to be the guy who is "fun at parties", i.e. charismatic, funny, confident, spontaneous, has social proof, that sort of thing.
Being "reliable" isn't a bad thing, but I wouldn't say it's an overriding concern most of the time. Perhaps a lack of reliability could be seen as sufficiently negative that a girl who would date a guy wouldn't want to marry him, but I've never seen it be a proactive concern beyond that.
This is certainly not ideal when it comes to having (especially many) kids. The biological window is limited (not only for women). But it is a perfectly rational course of action for a woman who wants to mitigate the risk of ending up with a terrible husband who will not treat her well.
But that's my point. This desire for safety is antisocial.
And before we start arguing that this is an unreasonable or special demand, let me remind you that men can still, to this day, be forced to fight and die for society.
If you want your society to continue to exist, you're going to have to sacrifice some comfort and take some risks to make sure that there is a next generation of your people. Or we can just live in anarchy and have no loyalties to each other until we get conquered by more sensible people. I so far see no reason to believe there is an alternative.
I stopped going to Starbucks when they changed their concealed carry policy years ago (2013). Still get Seattle's Best occasionally at restaurants when that's all they serve. Chick-fil-A is the only place I go with any hope of anything resembling service.
We never had to figure out how birds or bees fly to build our own flying machines.
I like this analogy. I wonder why I haven't heard it more often when people talk about LLMs being glorified autocomplete.
The hard work is already done; we already found the breakthroughs we need, and now we just need to apply more inputs to get massively superhuman results.
I really don't think it's just a scaling problem in its entirety. I find it plausible that scaling only gets us marginally more correct answers. Look at how disappointing GPT-4.5 was despite its massive size.
I believe by 2027 the doubters should be silenced one way or another.
If you're going by Scott's AI 2027 article, it says that little of real note beyond iterative improvements happens until 2027, and then 2027 itself is supposed to be the explosion. They then claim in some of the subarticles on that site that 2027 is really their earliest reasonable guess, that 2028 is also highly plausible, and that even 2029-2033 aren't unreasonable.
The issue with FOOM debates is that a hard takeoff is presumed to always be right around the corner, just one more algorithmic breakthrough and we could be there! I feel like Yud is set up in a position to effectively never be falsified even if we get to 2040 and AI is basically where it is now.
Can you rephrase the first bit of this? I'm not following.