
curious_straight_ca

1 follower   follows 0 users
joined 2022 November 13 09:38:42 UTC

User ID: 1845

No bio...

I agree there's a lot to think about there. But the materialist narrative isn't at all dishonest; I think it's clear how an honest and discerning person would come to believe it, even if wrong. It's very easy to be horrifically wrong without being dishonest.

Some games have strong social / political components, and the people who are good at those tend to be very sociable. Team shooters still exercise your ability to communicate, plan, and improve as a group.

I didn't claim we'd get space communism or that it'd go how any of the AI people expect it will.

I'm just claiming that AI is going to be a major factor in ways that you're probably not accounting for. Why can't AI have its own agency and take world-reshaping actions just like humans do?

People have the dream of the fairy godmother machine that will mean we don't have to work, will make us rich and comfortable, and will solve all our problems for us. I don't think that's ever going to happen.

The machine can be smarter and more capable than us and take power from us, though.

We got here - metal towers that scrape the sky, man's foot touching the moon, seeing the faces and hearing the voices of men ten thousand miles away, a billion people's labor acting in a decentralized yet coordinated dance - purely by human intelligence and capability. The specific structures of morality, governance, economy, and society that we imagine are fixed were created by us and for our purposes. They have changed, and they will change.

If we create something smarter than us, why won't it do the same - create its own structures, ones that won't involve us? Now, you describe accurately what Microsoft wants. But Microsoft doesn't get everything it wants. And Microsoft only wants what it wants because those specific social and material technologies make it powerful. What makes Microsoft want money or powerful computers? They lend Microsoft and its employees power, influence, capability. What'll give Microsoft even more of that? Creating AGI. Giving AGI more power and influence. And then the AGI can, uh, do that itself. And then?

500 years ago, "capitalism" and "computers" didn't exist. Why do you expect computers and capitalism to last for another 500 years, "just as it's always been"? "Consumerism" and "profitable joint stock corporations", that's just how it is, haha. Nothing caused that, and whatever caused it certainly can't change; we can't lose our position as kings of the world.

I'm not sure what you mean? He cares about this insofar as so many people in America care about it - as it illuminates how harmful democracy and progressivism and caring too much about the poor oppressed foreigners is - not for the conflict itself. He wouldn't be posting about it if everyone else weren't too.

Yes. But that's because I don't believe all the hopes/fears about Fairy Godmother AI and Paperclippers. We'll get machine intelligence of a kind, but we won't get Colossus or HAL or the Culture AIs.

The argument is incredibly compact. Do you believe that 1) computers can't have the intelligence and independent action of humans, despite obvious material paths to accomplishing that, paths we are currently aggressively pursuing, or that 2) we won't unleash that intelligence and independent action, despite the truly enormous potential individual and collective benefits of doing so?

Like, a million years ago there weren't humans (Homo sapiens). We evolved. Whether or not you believe in God, the fossil record and DNA clearly demonstrate that. Imagine a million years from now. If we create things smarter and more capable than ourselves, why won't they end up on top in a million years, the same way we did?

And how long does it look like it'll take? A thousand seems more plausible than a million, given computers weren't a thing 200 years ago. A hundred or two seems more plausible than a thousand. And suddenly it's an issue for your grandchildren, at least.

A reasonable question, and an important one, but not one I really want to discuss right now tbh.

I think it's not too relevant to the point: we have a lot of evidence, which we didn't have a thousand years ago, that there's not a heaven with Jesus and angels and the happy souls of all the do-gooders. Whether there's something non-mechanistic going on with the universe is important, and tied up in why people are so attracted to things like Christianity, but it still doesn't prove Christianity true.

Oh, Science, blah blah

Evolution by natural selection is easily the most important 'theological' thing to ever happen; it (together with history) explains every impulse that God is claimed to have given to man by independent choice. Every unexplainable natural phenomenon used to be attributable to God, and his role on that front today is minimal thanks to science - even today's Christians still claim various modern miracles (and if you investigate one of them deeply enough, it inevitably collapses). Like, how does Christianity relate to AGI? It doesn't! Does this mean AGI won't happen?

But that does not mean that someone proposing a 'decoupled' idea is right.

It means that some of them are in some parts right, and if you don't decouple you won't be able to notice that.

Eh, taboos are only partially arbitrary. "Weird person having sex with my child" is something people have strong instincts against, and "anyone having sex with any child" is the best fit in today's world. I think there'd always be some taboo around it.

Okay, teleport back to the 1700s. You're a Christian. A high decoupler invites you to a talk about how to assess the historicity of the bible. Do you accept?

On the one hand, yeah, it's a trap to convince you to become a non-denominational Deist, damn you to hell for eternity, and expel you from polite society.

On the other hand, the person's more right than they know, because the Christian God actually isn't real!

The ideologies and material practices of the next 50 years will be different from those of today, in ways that will necessarily not be emotionally 'coupled' in the way today's issues are. By refusing to 'decouple', you're covering your ears as the world changes around you.

my median timeline for grossly superhuman AGI is like 3 or 5 years at this point

Eh, I feel like grossly superhuman is at least 8-10. Look at how much progress has been made in intelligent text generation since GPT-4. Not that that's particularly comforting morally or philosophically.

I mean, the high fertility in conservative religions is caused by the sacred value placed on reproduction, omnipresent social pressure for reproduction, not allowing sex outside the context of reproduction, and social institutions that help support the raising of children. I don't see what this has to do with 'self-love'. Progressives who claim to have piles of self-love still don't have children because they don't value it, and they aren't pressured to do so, and it doesn't mesh with their work lives, and so on. Men in tribes that are 'hyper-competitive' in the sense that they had incredibly high rates of violence still had as many children as they could.

I want to say that it's still a skill issue, and the reasons aren't actually that complicated and are accessible to and influenceable by smart people with a bit of effort. That's true in a deep sense, but it's also true that the history of philosophy and psychology in adjacent areas is mostly a series of hilarious failures at understanding human motivation and thought by very intelligent and subtle people, so it probably is going to be very challenging for even above-average people.

I mean, the most likely scenario is she has concrete reasons she's not telling you, and those reasons are more like 'vibes' and are poor approximations of the load-bearing causes.

Also, though, if you reveal all of the tells you use to read someone, it becomes a lot easier for them to fake it later. Plus, I've heard women say that when they've given men the honest reasons they didn't make it, the men react negatively.

whether people raised with a language that distinguishes between two colours are able to identify them better/faster, classically with blue/green

Yeah, it's the ultraweak Sapir-Whorf hypothesis (language influences some things a little bit): you have more practice distinguishing between colors with words for them, so you do it faster. But you could, with practice, distinguish colors using words you currently know, or non-verbal color classes, just as well (and presumably artists or designers do). It doesn't support what people imagine Sapir-Whorf means - that language creates categories built into your mind that deeply restrict or guide the way you think. Which is mostly false imo: you can perfectly well learn things you don't have words for; the restriction is more knowledge and experience generally, which is significantly less faux-profound.

Countries aren't houses? If someone who lives a city away wants no shoes in his house, that's his choice (I'm fine with it). If he wants his wife to wear a veil, whatever. If he wants to live in a big house with his extended family, is that my problem?

Sure, some cultural issues are important - we might not want to import people from honor cultures who settle disputes with violence or don't want to get educated, without careful consideration and pushes for assimilation. But your analogy didn't point to that; it asserted both a right and a positive good to deny immigrants for entirely arbitrary reasons. Which seems dumb? My ancestors, and yours, likely had all sorts of cultural clashes and broke taboos against the natives when they came, but it's still nice that they did.

By "consent" you mean "consent (correctly)", which means you're independently judging that there's a non-consent reason the child shouldn't be having sex - and that reason is why the child shouldn't have sex. Why not just say '14 year olds shouldn't have sex with 18 year olds for '? Why say they 'can't consent'?

So as you can see, I'm not hiding utilitarian analysis anywhere up my sleeves.

I'm saying you got your current morals from past individuals or evolutionary processes that were actively making those calculations. That you're only doing them partially or in a flawed way doesn't change that that's where they emerged.

From where I sit it looks like you have an intrinsic moral reaction against slavery, and the complexity of the calculation is just there to hide the fact.

I have almost no intrinsic moral reaction against slavery! I'm rather reactionary by instinct, at this point. I'm still more or less against slavery because it seems pointless.

Should we promote surrogacy if we can guarantee that outcomes are "good", or should we do everything we can to limit it, even if it meant [insert catastrophe of your choice]?

I should've slotted in 'evolution' along with 'decentralized system of humans'. Yes, we should do that, and natural selection did do that when it created inborn moral instincts. At one point in the past, internal fertilization was unnatural, a freak accident of nature that one deformed organism happened to have, with all sorts of awful second order side effects. Except it turned out internal fertilization was beneficial for survival and capability, and it spread, and then the second-order effects were worked out, and now it's natural and good and we can't imagine anything else. *(I know natural selection is more complicated than that, but the argument still applies to however internal fertilization really evolved.)*

Like I pointed out above, you're not going to get utilitarians to endorse slavery just because it increases utils.

I'm defending morality being contingent and relying on complex calculation. I'm not defending universalist egalitarian hedonic utilitarianism. I also don't have an infinitely strong principle against slavery!

That by doing advanced calculus on harm and good you will arrive at a moral outcome

Okay, I'm in a town of 1000 people. One person has a factory that produces very nice shoes, but fills the air with smog. The smog makes everyone choke and gives everyone lung cancer. Should we forcibly close down the factory?

Now I'm in the same town, but instead of a factory, it's a slaughterhouse. The stench smells about as bad as the smog, but it doesn't cause lung cancer. Also, it provides much of the food for the town. Should we forcibly close down the slaughterhouse?

The answer is yes in the first case, no in the second case. One comes to this conclusion by, uh, doing calculations on the outcome. The first has lower benefit, higher cost; the second has higher benefit, lower cost. How else can you come to this kind of conclusion, if not by doing calculations on harm and good?
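The two cases can be written out as toy arithmetic. A minimal sketch, with all magnitudes invented purely for illustration (only the signs of the results matter to the argument):

```python
# Toy cost-benefit comparison for the two towns above.
# All numbers are made up for illustration; the point is only that
# the sign of (benefit - cost) flips between the two cases.

def net_benefit(benefit: float, cost: float) -> float:
    """Net benefit of keeping the facility open."""
    return benefit - cost

# Factory: nice shoes (modest benefit) vs. smog giving a town of
# 1000 people lung cancer (enormous cost).
factory = net_benefit(benefit=50, cost=10_000)

# Slaughterhouse: much of the town's food supply (large benefit)
# vs. a bad stench with no health damage (modest cost).
slaughterhouse = net_benefit(benefit=5_000, cost=50)

print("close the factory?", factory < 0)            # costs outweigh benefits
print("close the slaughterhouse?", slaughterhouse < 0)  # benefits outweigh costs
```

The hard part in real cases is of course estimating the magnitudes at all, not the subtraction; the sketch only shows that the decision rule is a comparison of outcomes.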

Maybe during the industrial revolution the air and water had to be a bit polluted because the only other option was no industry, but now we have better technology and can have industry with less pollution. Any rule in deontology or virtue ethics about how to make that decision just ends up deferring to the calculation of benefit.

Like, people exist, benefits and harms exist, actions lead to outcomes in incredibly complicated ways, whether you're a socialist or liberal or conservative or a nazi you need to judge actions based on their outcomes, and the calculations are complicated because the situations are complicated. Should we have a democracy or a monarchy? Under what conditions should we go to war? Should we have computers? Should we create advanced AI? Nonconsequentialist moral systems dodge these by taking the answers for granted and treating them as 'rules' or 'virtues'. But the virtues/rules themselves embed complexity that represents a calculation that some human, or perhaps a decentralized system of humans, made in the past.

Do you object to sum-of-utils, hedonism, consequentialism, or just the idea of a systematic universal set of moral values in the first place?

The proposed linkage between the latest controversial aella tweet and large-scale social strife is somewhat tenuous?

People said much worse 50 years ago, whether for shock value or out of genuine belief, and nothing too terrible happened.

You're right that advancing technology is eventually going to first strain, then snap, current moral values, but there are probably better examples than something child something sex.

I am not defending "Trust The Experts" as said by people on covid. (But I am not convinced that the, uh, non-experts (?) would've done better averaged over a large number of scenarios.) I am defending the enormous value of the group of people "experts" generally refers to!

From above:

me: But, for almost everyone in the general population, and for many people here, your vague guesses and anecdotal impressions are a lot worse. It takes effort, discipline, and raw intelligence to do better!

arjin: It does, and the replication crisis shows that systemically they aren't using either.

Depending on the field, somewhere between a third and two-thirds of the 'experts' are 'systemically' using their discipline, effort, and intelligence to be productive and useful.

My original comment:

The experts are wrong a lot. But, for almost everyone in the general population, and for many people here, your vague guesses and anecdotal impressions are a lot worse. It takes effort, discipline, and raw intelligence to do better! It's reasonable to not trust that everything's going great just because the economy numbers are up! But you shouldn't just jump from 'my costs feel higher than a few years ago' to 'clearly experts are wrong', dig deeper!

Which I stand by! The experts are wrong a lot, but the question isn't whether you trust them; it's whom to trust, and when, or how to figure out yourself how to do better. The former is quite hard and the latter is even harder.

For example, I almost never see a justification being given for why a certain expert is any good.

I see it constantly? You're appealing to an apparently omnipresent media or discourse, and more accurately specific segments of it you take issue with. Which media, which discourse, etc? I agree that a lot of it has very big deep problems, but I think a combination of a vague 'it' / 'them' being criticized and a lack of appreciation for the positives of the system leads to critique that doesn't really mean anything.

Hm, sometimes one has thoughts that are stupid, doesn't think about them too much, and writes them on the internet. Grandparent was one of those times. Sorry!

Finally, and this is probably 100% placebo, but am I the only person who feels like they can snap themselves into an Adderall-esque state (without the fun / high feeling)?

At least two! It's probably a somewhat different mechanism, but I can just ... do that. I generally find the attitude of 'I don't want to do X but will take a pill to make me more motivated' disturbing. You do have the capability to be motivated to do X if you desired it; you're just in a weird state where you want it in some abstract or socialized sense but not deeply enough to actually do it.